
Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Goal retention discussion with Eliezer

55 MaxTegmark 04 September 2014 10:23PM

Although I feel that Nick Bostrom’s new book “Superintelligence” is generally awesome and a well-needed milestone for the field, I do have one quibble: both he and Steve Omohundro appear to be more convinced than I am by the assumption that an AI will naturally tend to retain its goals as it reaches a deeper understanding of the world and of itself. I’ve written a short essay on this issue from my physics perspective, available at http://arxiv.org/pdf/1409.0813.pdf.

Eliezer Yudkowsky just sent the following extremely interesting comments, and told me he was OK with me sharing them here to spur a broader discussion of these issues, so here goes.

On Sep 3, 2014, at 17:21, Eliezer Yudkowsky <yudkowsky@gmail.com> wrote:

Hi Max!  You're asking the right questions.  Some of the answers we can
give you, some we can't, few have been written up and even fewer in any
well-organized way.  Benja or Nate might be able to expound in more detail
while I'm in my seclusion.

Very briefly, though:
The problem of utility functions turning out to be ill-defined in light of
new discoveries of the universe is what Peter de Blanc named an
"ontological crisis" (not necessarily a particularly good name, but it's
what we've been using locally).

http://intelligence.org/files/OntologicalCrises.pdf

The way I would phrase this problem now is that an expected utility
maximizer makes comparisons between quantities that have the type
"expected utility conditional on an action", which means that the AI's
utility function must be something that can assign utility-numbers to the
AI's model of reality, and these numbers must have the further property
that there is some computationally feasible approximation for calculating
expected utilities relative to the AI's probabilistic beliefs.  This is a
constraint that rules out the vast majority of all completely chaotic and
uninteresting utility functions, but does not rule out, say, "make lots of
paperclips".
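The type constraint above can be made concrete with a minimal sketch (my own illustration, not from the email): a hypothetical agent whose world-models are plain dicts, whose outcome model is hand-written, and whose utility function just counts paperclips.

```python
# A minimal sketch of "expected utility conditional on an action" (my own
# illustration, not from the email): world-models are plain dicts, the
# outcome model is hand-written, and the utility function counts paperclips.

def expected_utility(action, outcome_model, utility):
    """E[U | action]: sum over worlds of P(world | action) * U(world)."""
    return sum(p * utility(world) for world, p in outcome_model[action])

def choose_action(actions, outcome_model, utility):
    """Compare actions by the quantity 'expected utility conditional on an action'."""
    return max(actions, key=lambda a: expected_utility(a, outcome_model, utility))

# "Make lots of paperclips" is computable over the agent's model of reality:
utility = lambda world: world["paperclips"]

# Hypothetical outcome distributions conditional on each action:
outcome_model = {
    "build_factory": [({"paperclips": 1000}, 0.5), ({"paperclips": 0}, 0.5)],
    "do_nothing":    [({"paperclips": 1}, 1.0)],
}

best = choose_action(["build_factory", "do_nothing"], outcome_model, utility)
```

The point is only that the utility function must assign numbers to the agent's models, and that expected utilities must be feasibly computable relative to its beliefs.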

Models also have the property of being Bayes-updated using sensory
information; for the sake of discussion let's also say that models are
about universes that can generate sensory information, so that these
models can be probabilistically falsified or confirmed.  Then an
"ontological crisis" occurs when the hypothesis that best fits sensory
information corresponds to a model that the utility function doesn't run
on, or doesn't detect any utility-having objects in.  The example of
"immortal souls" is a reasonable one.  Suppose we had an AI that had a
naturalistic version of a Solomonoff prior, a language for specifying
universes that could have produced its sensory data.  Suppose we tried to
give it a utility function that would look through any given model, detect
things corresponding to immortal souls, and value those things.  Even if
the immortal-soul-detecting utility function works perfectly (it would in
fact detect all immortal souls) this utility function will not detect
anything in many (representations of) universes, and in particular it will
not detect anything in the (representations of) universes we think have
most of the probability mass for explaining our own world.  In this case
the AI's behavior is undefined until you tell me more things about the AI;
an obvious possibility is that the AI would choose most of its actions
based on low-probability scenarios in which hidden immortal souls existed
that its actions could affect.  (Note that even in this case the utility
function is stable!)
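The immortal-souls example can be rendered as a toy computation (my construction, not from the email): the utility function detects nothing in the best-fitting naturalistic model, so all remaining expected utility comes from low-probability soul-containing hypotheses.

```python
# Toy ontological crisis (my construction, not from the email): a utility
# function that only detects immortal souls assigns zero utility to every
# naturalistic model, so expected utility is driven entirely by whatever
# low-probability soul-containing hypotheses remain.

def soul_utility(model):
    # Detects all immortal souls the model represents; detects nothing otherwise.
    return sum(s["wellbeing"] for s in model.get("immortal_souls", []))

# Posterior over hypotheses after Bayes-updating on sensory information:
hypotheses = [
    ({"immortal_souls": []}, 0.99),                     # best-fitting naturalistic model
    ({"immortal_souls": [{"wellbeing": 10.0}]}, 0.01),  # hidden-souls model
]

expected = sum(p * soul_utility(model) for model, p in hypotheses)
# All of the expected utility comes from the 1% hypothesis, so an
# expected-utility maximizer's choices would be dominated by it.
```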

Since we don't know the final laws of physics and could easily be
surprised by further discoveries in the laws of physics, it seems pretty
clear that we shouldn't be specifying a utility function over exact
physical states relative to the Standard Model, because if the Standard
Model is even slightly wrong we get an ontological crisis.  Of course
there are all sorts of extremely good reasons we should not try to do this
anyway, some of which are touched on in your draft; there just is no
simple function of physics that gives us something good to maximize.  See
also Complexity of Value, Fragility of Value, indirect normativity, the
whole reason for a drive behind CEV, and so on.  We're almost certainly
going to be using some sort of utility-learning algorithm, the learned
utilities are going to bind to modeled final physics by way of modeled
higher levels of representation which are known to be imperfect, and we're
going to have to figure out how to preserve the model and learned
utilities through shifts of representation.  E.g., the AI discovers that
humans are made of atoms rather than being ontologically fundamental
humans, and furthermore the AI's multi-level representations of reality
evolve to use a different sort of approximation for "humans", but that's
okay because our utility-learning mechanism also says how to re-bind the
learned information through an ontological shift.

This sorta thing ain't going to be easy which is the other big reason to
start working on it well in advance.  I point out however that this
doesn't seem unthinkable in human terms.  We discovered that brains are
made of neurons but were nonetheless able to maintain an intuitive grasp
on what it means for them to be happy, and we don't throw away all that
info each time a new physical discovery is made.  The kind of cognition we
want does not seem inherently self-contradictory.

Three other quick remarks:

*)  The Omohundrian/Yudkowskian argument is not that we can take an arbitrary
stupid young AI and it will be smart enough to self-modify in a way that
preserves its values, but rather that most AIs that don't self-destruct
will eventually end up at a stable fixed-point of coherent
consequentialist values.  This could easily involve a step where, e.g., an
AI that started out with a neural-style delta-rule policy-reinforcement
learning algorithm, or an AI that started out as a big soup of
self-modifying heuristics, is "taken over" by whatever part of the AI
first learns to do consequentialist reasoning about code.  But this
process doesn't repeat indefinitely; it stabilizes when there's a
consequentialist self-modifier with a coherent utility function that can
precisely predict the results of self-modifications.  The part where this
does happen to an initial AI that is under this threshold of stability is
a big part of the problem of Friendly AI and it's why MIRI works on tiling
agents and so on!

*)  Natural selection is not a consequentialist, nor is it the sort of
consequentialist that can sufficiently precisely predict the results of
modifications that the basic argument should go through for its stability.
It built humans to be consequentialists that would value sex, not value
inclusive genetic fitness, and not value being faithful to natural
selection's optimization criterion.  Well, that's dumb, and of course the
result is that humans don't optimize for inclusive genetic fitness.
Natural selection was just stupid like that.  But that doesn't mean
there's a generic process whereby an agent rejects its "purpose" in the
light of exogenously appearing preference criteria.  Natural selection's
anthropomorphized "purpose" in making human brains is just not the same as
the cognitive purposes represented in those brains.  We're not talking
about spontaneous rejection of internal cognitive purposes based on their
causal origins failing to meet some exogenously-materializing criterion of
validity.  Our rejection of "maximize inclusive genetic fitness" is not an
exogenous rejection of something that was explicitly represented in us,
that we were explicitly being consequentialists for.  It's a rejection of
something that was never an explicitly represented terminal value in the
first place.  Similarly the stability argument for sufficiently advanced
self-modifiers doesn't go through a step where the successor form of the
AI reasons about the intentions of the previous step and respects them
apart from its constructed utility function.  So the lack of any universal
preference of this sort is not a general obstacle to stable
self-improvement.

*)   The case of natural selection does not illustrate a universal
computational constraint, it illustrates something that we could
anthropomorphize as a foolish design error.  Consider humans building Deep
Blue.  We built Deep Blue to attach a sort of default value to queens and
central control in its position evaluation function, but Deep Blue is
still perfectly able to sacrifice queens and central control alike if the
position reaches a checkmate thereby.  In other words, although an agent
needs crystallized instrumental goals, it is also perfectly reasonable to
have an agent which never knowingly sacrifices the terminally defined
utilities for the crystallized instrumental goals if the two conflict;
indeed "instrumental value of X" is simply "probabilistic belief that X
leads to terminal utility achievement", which is sensibly revised in the
presence of any overriding information about the terminal utility.  To put
it another way, in a rational agent, the only way a loose generalization
about instrumental expected-value can conflict with and trump terminal
actual-value is if the agent doesn't know it, i.e., it does something that
it reasonably expected to lead to terminal value, but it was wrong.
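The Deep Blue point can be sketched as a toy evaluation function (invented for illustration, not Deep Blue's actual code): material carries a crystallized default value, but terminal value overrides it whenever the two conflict.

```python
# Invented toy evaluation function (not Deep Blue's actual code) showing a
# crystallized instrumental value, material, being overridden by terminal
# value, checkmate, whenever the two conflict.

CHECKMATE = float("inf")

def evaluate(position):
    if position["checkmate_delivered"]:
        return CHECKMATE                 # terminal value always dominates
    return 9 * position["our_queens"]    # instrumental default: queens are valuable

keep_queen   = {"checkmate_delivered": False, "our_queens": 1}
sac_for_mate = {"checkmate_delivered": True,  "our_queens": 0}

# The queen sacrifice that forces mate beats keeping the queen:
best = max([keep_queen, sac_for_mate], key=evaluate)
```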

This has been very off-the-cuff and I think I should hand this over to
Nate or Benja if further replies are needed, if that's all right.

[meta] New LW moderator: Viliam_Bur

35 Kaj_Sotala 13 September 2014 01:37PM

Some time back, I wrote that I was unwilling to continue with investigations into mass downvoting, and asked people for suggestions on how to deal with them from now on. The top-voted proposal in that thread suggested making Viliam_Bur into a moderator, and Viliam gracefully accepted the nomination. So I have given him moderator privileges and also put him in contact with jackk, who provided me with the information necessary to deal with the previous cases. Future requests about mass downvote investigations should be directed to Viliam.

Thanks a lot for agreeing to take up this responsibility, Viliam! It's not an easy one, but I'm very grateful that you're willing to do it. Please post a comment here so that we can reward you with some extra upvotes. :)

Hal Finney has just died.

32 cousin_it 28 August 2014 07:39PM

Talking to yourself: A useful thinking tool that seems understudied and underdiscussed

29 chaosmage 09 September 2014 04:56PM

I have returned from a particularly fruitful Google search, with unexpected results.

My question was simple. I was pretty sure that talking to myself aloud makes me temporarily better at solving problems that need a lot of working memory. It is a thinking tool that I find to be of great value, and that I imagine would be of interest to anyone who'd like to optimize their problem solving. I just wanted to collect some evidence on that, make sure I'm not deluding myself, and possibly learn how to enhance the effect.

This might be just lousy Googling on my part, but the evidence is surprisingly unclear and disorganized. There are at least three separate Wiki pages for it. They don't link to each other. Instead they present the distinct models of three separate fields: autocommunication in communication studies, semiotics and other cultural studies; intrapersonal communication ("self-talk" redirects here) in anthropology and (older) psychology; and private speech in developmental psychology. The first is useless for my purpose, the second mentions "may increase concentration and retention" with no source, and the third confirms my suspicion that this behavior boosts memory, motivation and creativity, but it only talks about children.

Google Scholar yields lots of sports-related results for "self-talk" because it can apparently improve the performance of athletes and if there's something that obviously needs the optimization power of psychology departments, it is competitive sports. For "intrapersonal communication" it has papers indicating it helps in language acquisition and in dealing with social anxiety. Both are dwarfed by the results for "private speech", which again focus on children. There's very little on "autocommunication" and what is there has nothing to do with the functioning of individual minds.

So there's a bunch of converging pieces of evidence supporting the usefulness of this behavior, but they're from several separate fields that don't seem to have noticed each other very much. How often do you find that?

Let me quickly list a few ways that I find it plausible to imagine talking to yourself could enhance rational thought.

  • It taps the phonological loop, a distinct part of working memory that might otherwise sit idle in non-auditory tasks. More memory is always better, right?
  • Auditory information is retained more easily, so making thoughts auditory helps remember them later.
  • It lets you commit to thoughts, and build upon them, in a way that is more powerful (and slower) than unspoken thought while less powerful (but quicker) than action. (I don't have a good online source for this one, but Inside Jokes should convince you, and has lots of new cognitive science to boot.)
  • System 1 does seem to understand language, especially if it does not use complex grammar - so this might be a useful way for results of System 2 reasoning to be propagated. Compare affirmations. Anecdotally, whenever I'm starting a complex task, I find stating my intent out loud makes a huge difference in how well the various submodules of my mind cooperate.
  • It lets separate parts of your mind communicate in a fairly natural fashion, slows each of them down to the speed of your tongue and makes them not interrupt each other so much. (This is being used as a psychotherapy method.) In effect, your mouth becomes a kind of talking stick in their discussion.

All told, if you're talking to yourself you should be more able to solve complex problems than somebody of your IQ who doesn't, although somebody of your IQ with a pen and a piece of paper should still outthink both of you.

Given all that, I'm surprised this doesn't appear to have been discussed on LessWrong. Honesty: Beyond Internal Truth comes close but goes past it. Again, this might be me failing to use a search engine, but I think this is worth more of our attention than it has gotten so far.

I'm now almost certain talking to myself is useful, and I already find hindsight bias trying to convince me I've always been so sure. But I wasn't - I was suspicious because talking to yourself is an early warning sign of schizophrenia, and is frequent in dementia. But in those cases, it might simply be an autoregulatory response to failing working memory, not a pathogenetic element. After all, its memory enhancing effect is what the developmental psychologists say the kids use it for. I do expect social stigma, which is why I avoid talking to myself when around uninvolved or unsympathetic people, but my solving of complex problems tends to happen away from those anyway so that hasn't been an issue really.

So, what do you think? Useful?

Fighting Biases and Bad Habits like Boggarts

29 palladias 21 August 2014 05:07PM

TL;DR: Building humor into your habits for spotting and correcting errors makes the fix more enjoyable and easier to talk about, helps you get social support, and limits the danger of a contempt spiral.

 

One of the most reliably bad decisions I've made on a regular basis is the choice to stay awake (well, "awake") and on the internet past the point where I can get work done, or even have much fun.  I went through a spell where I even fell asleep on the couch more nights than not, unable to muster the will or judgement to get up and go downstairs to bed.

I could remember (even sometimes in the moment) that this was a bad pattern, but, the more tired I was, the more tempting it was to think that I should just buckle down and apply more willpower to be more awake and get more out of my computer time.  Going to bed was a solution, but it was hard for it not to feel (to my sleepy brain and my normal one) like a bit of a cop out.

Only two things helped me really keep this failure mode in check.  One was setting a hard bedtime (and beeminding it) as part of my sacrifice for Advent.   But the other key tool (which has lasted me long past Advent) is the gif below.

sleep eating ice cream

The poor kid struggling to eat his ice cream cone, even in the face of his exhaustion, is hilarious.  And not too far off the portrait of me around 2am scrolling through my Feedly.

Thinking about how stupid or ineffective or insufficiently strong-willed I'm being makes it hard for me to do anything that feels like a retreat from my current course of action.  I want to master the situation and prove I'm stronger.  But catching on to the fact that my current situation (of my own making or not) is ridiculous, makes it easier to laugh, shrug, and move on.

I think the difference is that it's easy for me to feel contemptuous of myself when frustrated, and easy to feel fond when amused.

I've tried to strike the new emotional tone when I'm working on catching and correcting other errors.  (e.g. "Stupid, you should have known to leave more time to make the appointment!  Planning fallacy!" becomes "Heh, I guess you thought that adding two 'trivially short' errands was a closed set, and must remain 'trivially short.' That's a pretty silly error.")

In the first case, noticing and correcting an error feels punitive, since it's quickly followed by a hefty dose of flagellation, but the second comes with a quick laugh and an easier shift to a growth-mindset framing.  Funny stories about errors are also easier to tell, increasing the chance my friends can help catch me out next time, or that I'll be better at spotting the error just by keeping it fresh in my memory. Not to mention, in order to get the joke, I tend to look for a more specific cause of the error than stupid/lazy/etc.

As far as I can tell, it also helps that amusement is a pretty different feeling than the ones that tend to be active when I'm falling into error (frustration, anger, feeling trapped, impatience, etc).  So, for a couple of seconds at least, I'm out of the rut and now need to actively return to it to stay stuck. 

In the heat of the moment of anger/akrasia/etc is a bad time to figure out what's funny, but, if you're reflecting on your errors after the fact, in a moment of consolation, it's easier to go back armed with a helpful reframing, ready to cast Riddikulus!

 

Crossposted from my personal blog, Unequally Yoked.

Unpopular ideas attract poor advocates: Be charitable

27 mushroom 15 September 2014 07:30PM

Unfamiliar or unpopular ideas will tend to reach you via proponents who:

  •  ...hold extreme interpretations of these ideas.
  • ...have unpleasant social characteristics.
  • ...generally come across as cranks.

The basic idea: It's unpleasant to promote ideas that result in social sanction, and frustrating when your ideas are met with indifference. Both situations are more likely when talking to an ideological out-group. Given a range of positions on an in-group belief, who will decide to promote the belief to outsiders? On average, it will be those who believe the benefits of the idea are large relative to in-group opinion (extremists), those who view the social costs as small (disagreeable people), and those who are dispositionally drawn to promoting weird ideas (cranks).

I don't want to push this pattern too far. This isn't a refutation of any particular idea. There are reasonable people in the world, and some of them even express their opinions in public, (in spite of being reasonable). And sometimes the truth will be unavoidably unfamiliar and unpopular, etc. But there are also...

Some benefits that stem from recognizing these selection effects:

  • It's easier to be charitable to controversial ideas, when you recognize that you're interacting with people who are terribly suited to persuade you. I'm not sure "steelmanning" is the best idea (trying to present the best argument for an opponent's position). Based on the extremity effect, another technique is to construct a much diluted version of the belief, and then try to steelman the diluted belief.
  • If your group holds fringe or unpopular ideas, you can avoid these patterns when you want to influence outsiders.
  • If you want to learn about an afflicted issue, you might ignore the public representatives and speak to the non-evangelical instead (you'll probably have to start the conversation).
  • You can resist certain polarizing situations, in which the most visible camps hold extreme and opposing views. This situation worsens when those with non-extreme views judge the risk of participation as excessive, and leave the debate to the extremists (who are willing to take substantial risks for their beliefs). This leads to the perception that the current camps represent the only valid positions, which creates a polarizing loop. Because this is a sort of coordination failure among non-extremists, knowing to covertly look for other non-vocal moderates is a first step toward a solution. (Note: Sometimes there really aren't any moderates.)
  • Related to the previous point: You can avoid exaggerating the ideological unity of a group based on the group's leadership, or believing that the entire group has some obnoxious trait present in the leadership. (Note: In things like elections and war, the views of the leadership are what you care about. But you still don't want to be confused about other group members.)

 

I think the first benefit listed is the most useful.

To sum up: An unpopular idea will tend to get poor representation for social reasons, which makes it seem like a worse idea than it really is, even granting that many unpopular ideas are unpopular for good reason. So when you encounter an idea that seems unpopular, you're probably hearing about it from a sub-optimal source, and you should try to be charitable towards the idea before dismissing it.

Changes to my workflow

27 paulfchristiano 26 August 2014 05:29PM

About 18 months ago I made a post here on my workflow. I've received a handful of requests for follow-up, so I thought I would make another post detailing changes since then. I expect this post to be less useful than the last one.

For the most part, the overall outline has remained pretty stable and feels very similar to 18 months ago. Things not mentioned below have mostly stayed the same. I believe that the total effect of continued changes has been continued but much smaller improvement, though it is hard to tell (as opposed to the last changes, which were more clearly improvements).

Based on comparing time logging records I seem to now do substantially more work on average, but there are many other changes during this period that could explain the change (including changes in time logging). Changes other than work output are much harder to measure; I feel like they are positive but I wouldn't be surprised if this were an illusion.

Splitting days:

I now regularly divide my day into two halves, and treat the two halves as separate units. I plan each separately and reflect on each separately. I divide them by an hour long period of reflecting on the morning, relaxing for 5-10 minutes, napping for 25-30 minutes, processing my emails, and planning the evening. I find that this generally makes me more productive and happier about the day. Splitting my days is often difficult due to engagements in the middle of the day, and I don't have a good solution to that.

WasteNoTime:

I have longstanding objections to explicitly rationing internet use (since it seems either indicative of a broader problem that should be resolved directly, or else to serve a useful function that would be unwise to remove). That said, I now use the extension WasteNoTime to limit my consumption of blogs, webcomics, facebook, news sites, browser games, etc., to 10 minutes each half-day. This has cut the amount of time I spend browsing the internet from an average of 30-40 minutes to an average of 10-15 minutes. It doesn't seem to have been replaced by lower-quality leisure, but by a combination of work and higher-quality leisure.

Similarly, I turned off the newsfeed in Facebook, which I found to improve the quality of my internet time in general (the primary issue was that I would sometimes be distracted by the newsfeed while sending messages over Facebook, which wasn't my favorite way to use up WasteNoTime minutes).

I also tried StayFocusd, but ended up adopting WasteNoTime because of the ability to set limits per half-day (via "At work" and "not at work" timers) rather than per-day. I find that the main upside is cutting off the tail of derping (e.g. getting sucked into a blog comment thread, or looking into a particularly engrossing issue), and for this purpose per half-day timers are much more effective.

Email discipline:

I set gmail to archive all emails on arrival and assign them the special label "In." This lets me search for emails and compose emails, using the normal gmail interface, without being notified of new arrivals. I process the items with label "In" (typically turning emails into todo items to be processed by the same system that deals with other todo items) at the beginning of each half day. Each night I scan my email quickly for items that require urgent attention.

Todo lists / reminders:

I continue to use todo lists for each half day and for a range of special conditions. I now check these lists at the beginning of each half day rather than before going to bed.

I also maintain a third list of "reminders." These are things that I want to be reminded of periodically, organized by day; each morning I look at the day's reminders and think about them briefly. Each of them is then copied and filed under a future day. If I feel like I remember a thing well I file it far in the future; if I feel like I don't remember it well I file it in the near future.
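The filing rule is essentially a hand-run spaced-repetition step, which could be sketched like this (my reconstruction; the intervals are assumptions, not the author's):

```python
# Rough reconstruction of the reminder-filing rule (the exact intervals are
# my assumptions, not the author's): a reviewed reminder is copied under a
# future day, further out the better it is remembered.

import datetime

def refile(reminder, remembered_well, today):
    """Return the (future_day, reminder) pair under which to file the copy."""
    days = 30 if remembered_well else 3  # assumed spacing intervals
    return (today + datetime.timedelta(days=days), reminder)

today = datetime.date(2014, 9, 1)
day, text = refile("check smoke alarm batteries", remembered_well=True, today=today)
```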

Over the last month most of these reminders have migrated to be in the form "If X, then Y," e.g. "If I agree to do something for someone, then pause, say `actually I should think about it for a few minutes to make sure I have time,' and set a 5 minute timer that night to think about it more clearly." These are designed to fix problems that I notice when reflecting on the day. This is a recommendation from CFAR folks, which seems to be working well, though is the newest part of the system and least tested.

Isolating "todos":

I now attempt to isolate things that probably need doing, but don't seem maximally important; I aim to do them only on every 5th day, and only during one half-day. If I can't finish them in this time, I will typically delay them 5 days. When they spill over to other days, I try to at least keep them to one half-day or the other. I don't know if this helps, but it feels better to have isolated unproductive-feeling blocks of time rather than scattering them throughout the week.

I don't do this very rigidly. I expect the overall level of discipline I have about it is comparable to or lower than a normal office worker who has a clearer division between their personal time and work time.

Toggl:

I now use Toggl for detailed time tracking. Katja Grace and I experimented with about half a dozen other systems (Harvest, Yast, Klok, Freckle, Lumina, I expect others I'm forgetting) before settling on Toggl. It has a depressing number of flaws, but ends up winning for me by making it very fast to start and switch timers which is probably the most important criterion for me. It also offers reviews that work out well with what I want to look at.

I find the main value adds from detailed time tracking are:

1. Knowing how long I've spent on projects, especially long-term projects. My intuitive estimates are often off by more than a factor of 2, even for things taking 80 hours; this can lead me to significantly underestimate the costs of taking on some kinds of projects, and it can also lead me to think an activity is unproductive instead of productive by overestimating how long I've actually spent on it.

2. Accurate breakdowns of time in a day, which guide efforts at improving my day-to-day routine. They probably also make me feel more motivated about working, and improve focus during work.

Reflection / improvement:

Reflection is now a smaller fraction of my time, down from 10% to 3-5%, based on diminishing returns to finding stuff to improve. Another 3-5% is now redirected into longer-term projects to improve particular aspects of my life (I maintain a list of possible improvements, roughly sorted by goodness). Examples: buying new furniture, improvements to my diet (Holden's powersmoothie is great), improvements to my sleep (low doses of melatonin seem good). At the moment the list of possible improvements is long enough that adding to the list is less valuable than doing things on the list.

I have equivocated a lot about how much of my time should go into this sort of thing. My best guess is the number should be higher.

-Pomodoros:

I don't use pomodoros at all any more. I still have periods of uninterrupted work, often of comparable length, for individual tasks. This change wasn't extremely carefully considered, it mostly just happened. I find explicit time logging (such that I must consciously change the timer before changing tasks) seems to work as a substitute in many cases. I also maintain the habit of writing down candidate distractions and then attending to them later (if at all).

For larger tasks I find that I often prefer longer blocks of unrestricted working time. I continue to use Alinof timer to manage these blocks of uninterrupted work.

-Catch:

Catch disappeared, and I haven't found a replacement that I find comparably useful. (It's also not that high on the list of priorities.) I now just send emails to myself, but I do it much less often.

-Beeminder:

I no longer use beeminder. This again wasn't super-considered, though it was based on a very rough impression of overhead being larger than the short-term gains. I think beeminder was helpful for setting up a number of habits which have persisted (especially with respect to daily routine and regular focused work), and my long-term averages continue to satisfy my old beeminder goals.

Project outlines:

I now organize notes about each project I am working on in a more standardized way, with "Queue of todos," "Current workspace," and "Data" as the three subsections. I'm not thrilled by this system, but it seems to be an improvement over the previous informal arrangement. In particular, having a workspace into which I can easily write thoughts without thinking about where they fit, and only later sorting them into the data section once it's clearer how they fit in, decreases the activation energy of using the system. I now use Toggl rather than maintaining time logs by hand.

Randomized trials:

As described in my last post I tried various randomized trials (esp. of effects of exercise, stimulant use, and sleep on mood, cognitive performance, and productive time). I have found extracting meaningful data from these trials to be extremely difficult, due to straightforward issues with signal vs. noise. There are a number of tests which I still do expect to yield meaningful data, but I've increased my estimates for the expensiveness of useful tests substantially, and they've tended to fall down the priority list. For some things I've just decided to do them without the data, since my best guess is positive in expectation and the data is too expensive to acquire.

 

Announcing The Effective Altruism Forum

27 RyanCarey 24 August 2014 08:07AM

The Effective Altruism Forum will be launched at effective-altruism.com on September 10, British time.

Now seems like a good time to discuss why we might need an Effective Altruism Forum, and how it might compare to LessWrong.

About the Effective Altruism Forum

The motivation for the Effective Altruism Forum is to improve the quality of effective altruist discussion and coordination. A big part of this is to give many of the useful features of LessWrong to effective altruists, including:

 

  • Archived, searchable content (this will begin with archived content from effective-altruism.com)
  • Meetups
  • Nested comments
  • A karma system
  • A dynamically updated list of external effective altruist blogs
  • Introductory materials (this will begin with these articles)

 

The Effective Altruism Forum has been designed by Mihai Badic. Over the last month, it has been developed by Trike Apps, who have built the new site using the LessWrong codebase. I'm glad to report that it is now basically ready, looks nice, and is easy to use.

I expect that at the new forum, as on the effective altruist Facebook and Reddit pages, people will want to discuss which intellectual procedures to use to pick effective actions. I also expect some proposals of effective altruist projects, and offers of resources. So users of the new forum will share LessWrong's interest in instrumental and epistemic rationality. On the other hand, I expect that few of its users will want to discuss the technical aspects of artificial intelligence, anthropics or decision theory, and to the extent that they do so, they will want to do it at LessWrong. As a result, I expect the new forum to cause:

 

  • A bunch of materials on effective altruism and instrumental rationality to be collated for new effective altruists
  • Discussion of old LessWrong materials to resurface
  • A slight increase to the number of users of LessWrong, possibly offset by some users spending more of their time posting at the new forum.

 

At least initially, the new forum won't have a wiki or a Main/Discussion split and won't have any institutional affiliations.

Next Steps:

It's really important to make sure that the Effective Altruism Forum is established with a beneficial culture. If people want to help that process by writing some seed materials, to be posted around the time of the site's launch, then they can contact me at ry [dot] duff [at] gmail.com. Alternatively, they can wait a short while until they automatically receive posting privileges.

It's also important that the Effective Altruism Forum helps the shared goals of rationalists and effective altruists, and has net positive effects on LessWrong in particular. Any suggestions for improving the odds of success for the effective altruism forum are most welcome.

Funding cannibalism motivates concern for overheads

24 Thrasymachus 30 August 2014 12:42AM

Summary: 'Overhead expenses' (CEO salary, percentage spent on fundraising) are often deemed a poor measure of charity effectiveness by Effective Altruists, and so they disprefer means of charity evaluation which rely on these. However, 'funding cannibalism' suggests that these metrics (and the norms that engender them) have value: if fundraising is broadly a zero-sum game between charities, then there's a commons problem where all charities could spend less money on fundraising and all do more good, but each is locally incentivized to spend more. Donor norms against increasing spending on zero-sum 'overheads' might be a good way of combating this. This valuable collective action of donors may explain the apparent underutilization of fundraising by charities, and perhaps should make us cautious in undermining it.

The EA critique of charity evaluation

Pre-Givewell, the common means of evaluating charities (Guidestar, Charity Navigator) used a mixture of governance checklists and 'overhead indicators'. Charities would gain points both for having features associated with good governance (being transparent in the right ways, balancing budgets, the right sorts of corporate structure) and for spending their money on programs while avoiding 'overhead expenses' like administration and (especially) fundraising. For shorthand, call this 'common sense' evaluation.

The standard EA critique is that common sense evaluation doesn't capture what is really important: outcomes. It is easy to imagine charities that look really good to common sense evaluation yet have negligible (or negative) outcomes.  In the case of overheads, it becomes unclear whether these are even proxy measures of efficacy. Any fundraising that still 'turns a profit' looks like a good deal, whether it comprises five percent of a charity's spending or fifty.

A summary of the EA critique of common sense evaluation is that its myopic focus on these metrics creates pathological incentives, as these metrics frequently lie anti-parallel to maximizing efficacy. To score well on these evaluations, charities may be encouraged to raise less money, hire less able staff, and cut corners in their own management, even if doing these things would be false economies.

 

Funding cannibalism and commons tragedies

In the wake of the ALS 'Ice bucket challenge', Will MacAskill suggested there is considerable 'funding cannibalism' in the non-profit sector. Instead of the Ice bucket challenge 'raising' money for ALS, it has taken money that would have been donated to other causes instead - cannibalizing other causes. Rather than each charity raising funds independently of one another, they compete for a fairly fixed pie of aggregate charitable giving.

The 'cannibalism' thesis is controversial, but looks plausible to me, especially when looking at 'macro' indicators: the proportion of household spending going to charity looks pretty fixed whilst fundraising has increased dramatically, for example.

If true, cannibalism is important. As MacAskill points out, the tens of millions of dollars raised for ALS are no longer an untrammelled good, alloyed as they are with the opportunity cost of whatever other causes they have cannibalized (q.v.). There's also a more general consideration: if there is a fixed pot of charitable giving insensitive to aggregate fundraising, then fundraising becomes a commons problem. If all charities could spend less on their fundraising, none would lose out, so all could spend more of their funds on their programs. However, for any one charity to unilaterally spend less on fundraising allows the others to cannibalize it.
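The commons structure here can be made concrete with a toy model (all numbers are invented for illustration, not drawn from the post): suppose two charities split a fixed donation pool in proportion to their fundraising spend, which is paid out of what they raise.

```python
POOL = 100.0  # fixed aggregate charitable giving, in arbitrary units


def program_funds(my_spend, other_spend):
    """Money left for programs: my share of the fixed pool (proportional
    to fundraising spend) minus my own fundraising costs."""
    total = my_spend + other_spend
    share = my_spend / total if total > 0 else 0.5
    return POOL * share - my_spend


# If both charities restrain themselves (spend 5 each), each keeps 45 for
# programs. A unilateral defector who spends 20 keeps 60 while the
# restrained charity drops to 15 -- so each is pushed to spend more --
# yet mutual defection leaves both with only 30.
for mine, theirs in [(5, 5), (20, 5), (5, 20), (20, 20)]:
    print(mine, theirs, program_funds(mine, theirs))
```

On this sketch, donor norms that penalize high fundraising ratios act like an externally enforced cooperation rule in the game, holding both charities at the mutually preferable low-spend outcome.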

 

Civilizing Charitable Cannibals, and Metric Meta-Myopia

Coordination among charities to avoid this commons tragedy is far-fetched. Yet coordination of donors on shared norms about 'overhead ratio' can help. By penalizing a charity for spending too much on zero-sum games with other charities like fundraising, donors can stop a race-to-the-bottom fundraising free-for-all and the burning of the charitable commons it implies. The apparently-high marginal return to fundraising might suggest this is already in effect (and effective!).

The contrarian take would be that it is the EA critique of charity evaluation which is myopic, not the charity evaluation itself - by looking at the apparent benefit for a single charity of more overhead, the EA critique ignores the broader picture of the non-profit ecosystem, and their attack undermines a key environmental protection of an important commons - further, one which the right tail of most effective charities benefit from just as much as the crowd of 'great unwashed' other causes. (Fundraising ability and efficacy look like they should be pretty orthogonal. Besides, if they correlate well enough that you'd expect the most efficacious charities would win the zero-sum fundraising game, couldn't you dispense with Givewell and give to the best fundraisers?)

The contrarian view probably goes too far. Although there's a case for communally caring about fundraising overheads, as cannibalism leads us to guess it is zero sum, parallel reasoning is hard to apply to administration overhead: charity X doesn't lose out if charity Y spends more on management, but charity Y is still penalized by common sense evaluation even if its overall efficacy increases. I'd guess that features like executive pay lie somewhere in the middle: non-profit executives could be poached by for-profit industries, so it is not as simple as donors prodding charities to coordinate to lower executive pay; but donors can prod charities not to throw away whatever 'non-profit premium' they do have in competing with one another for top talent (c.f.). If so, we should castigate people less for caring about overhead, even if we still want to encourage them to care about efficacy too.

The invisible hand of charitable pan-handling

If true, it is unclear whether the story that should be told is 'common sense was right all along and the EA movement overconfidently criticised it' or 'a stopped clock is right twice a day, and the generally wrong-headed common sense approach had an unintended feature amongst the bugs'. I'd lean towards the latter, simply because the advocates of the common sense approach have not (to my knowledge) articulated these considerations themselves.

However, many of us believe the implicit machinery of the market can turn without many of the actors within it having any explicit understanding of it. Perhaps the same applies here. If so, we should be less confident in claiming the status quo is pathological and we can do better: there may be a rationale eluding both us and its defenders.

I'm holding a birthday fundraiser

23 Kaj_Sotala 05 September 2014 12:38PM

EDIT: The fundraiser was successfully completed, raising the full $500 for worthwhile charities. Yay!

Today's my birthday! And per Peter Hurford's suggestion, I'm holding a birthday fundraiser to help raise money for MIRI, GiveDirectly, and Mercy for Animals. If you like my activity on LW or elsewhere, please consider giving a few dollars to one of these organizations via the fundraiser page. You can specify which organization you wish to donate in the comment of the donation, or just leave it unspecified, in which case I'll give your donation to MIRI.

If you don't happen to be particularly altruistically motivated, just consider it a birthday gift to me - it will give me warm fuzzies to know that I helped move money for worthy organizations. And if you are altruistically motivated but don't care about me in particular, maybe you still can get yourself to donate more than usual by hacky stuff like someone you know on the Internet having a birthday. :)

If someone else wants to hold their own birthday fundraiser, here are some tips: birthday fundraisers.

"Follow your dreams" as a case study in incorrect thinking

23 cousin_it 20 August 2014 01:18PM

This post doesn't contain any new ideas that LWers don't already know. It's more of an attempt to organize my thoughts and have a writeup for future reference.

Here's a great quote from Sam Hughes, giving some examples of good and bad advice:

"You and your gaggle of girlfriends had a saying at university," he tells her. "'Drink through it'. Breakups, hangovers, finals. I have never encountered a shorter, worse, more densely bad piece of advice." Next he goes into their bedroom for a moment. He returns with four running shoes. "You did the right thing by waiting for me. Probably the first right thing you've done in the last twenty-four hours. I subscribe, as you know, to a different mantra. So we're going to run."

The typical advice given to young people who want to succeed in highly competitive areas, like sports, writing, music, or making video games, is to "follow your dreams". I think that advice is up there with "drink through it" in terms of sheer destructive potential. If it were replaced with "don't bother following your dreams" every time it was uttered, the world might become a happier place.

The amazing thing about "follow your dreams" is that thinking about it uncovers a sort of perfect storm of biases. It's fractally wrong, like PHP, where the big picture is wrong and every small piece is also wrong in its own unique way.

The big culprit is, of course, optimism bias due to perceived control. I will succeed because I'm me, the special person at the center of my experience. That's the same bias that leads us to overestimate our chances of finishing the thesis on time, or having a successful marriage, or any number of other things. Thankfully, we have a really good debiasing technique for this particular bias, known as reference class forecasting, or inside vs outside view. What if your friend Bob was a slightly better guitar player than you? Would you bet a lot of money on Bob making it big like Jimi Hendrix? The question is laughable, but then so is betting the years of your own life, with a smaller chance of success than Bob.

That still leaves many questions unanswered, though. Why do people offer such advice in the first place, why do other people follow it, and what can be done about it?

Survivorship bias is one big reason we constantly hear successful people telling us to "follow our dreams". Successful people don't really know why they are successful, so they attribute it to their hard work and not giving up. The media amplifies that message, while millions of failures go unreported because they're not celebrities, even though they try just as hard. So we hear about successes disproportionately, in comparison to how often they actually happen, and that colors our expectations of our own future success. Sadly, I don't know of any good debiasing techniques for this error, other than just reminding yourself that it's an error.

When someone has invested a lot of time and effort into following their dream, it feels harder to give up due to the sunk cost fallacy. That happens even with very stupid dreams, like the dream of winning at the casino, that were obviously installed by someone else for their own profit. So when you feel convinced that you'll eventually make it big in writing or music, you can remind yourself that compulsive gamblers feel the same way, and that feeling something doesn't make it true.

Of course there are good dreams and bad dreams. Some people have dreams that don't tease them for years with empty promises, but actually start paying off in a predictable time frame. The main difference between the two kinds of dream is the difference between positive-sum games, a.k.a. productive occupations, and zero-sum games, a.k.a. popularity contests. Sebastian Marshall's post Positive Sum Games Don't Require Natural Talent makes the same point, and advises you to choose a game where you can be successful without outcompeting 99% of other players.

The really interesting question to me right now is, what sets someone on the path of investing everything in a hopeless dream? Maybe it's a small success at an early age, followed by some random encouragement from others, and then you're locked in. Is there any hope for thinking back to that moment, or set of moments, and making a little twist to put yourself on a happier path? I usually don't advise people to change their desires, but in this case it seems to be the right thing to do.

Overly convenient clusters, or: Beware sour grapes

22 KnaveOfAllTrades 02 September 2014 04:04AM

Related to: Policy Debates Should Not Appear One-Sided

There is a well-known fable which runs thus:

“Driven by hunger, a fox tried to reach some grapes hanging high on the vine but was unable to, although he leaped with all his strength. As he went away, the fox remarked 'Oh, you aren't even ripe yet! I don't need any sour grapes.' People who speak disparagingly of things that they cannot attain would do well to apply this story to themselves.”

This gives rise to the common expression ‘sour grapes’, referring to a situation in which one incorrectly claims to not care about something to save face or feel better after being unable to get it.

This seems to be related to a general phenomenon, in which motivated cognition leads one to flinch away from the prospect of an action that is inconvenient or painful in the short term by concluding that a less-painful option strictly dominates the more-painful one.

In the fox’s case, the allegedly-dominating option is believing (or professing) that he did not want the grapes. This spares him the pain of feeling impotent in face of his initial failure, or the embarrassment of others thinking him to have failed. If he can’t get the grapes anyway, then he might as well erase the fact that he ever wanted them, right? The problem is that considering this line of reasoning will make it more tempting to conclude that the option really was dominating—that he really couldn’t have gotten the grapes. But maybe he could’ve gotten the grapes with a bit more work—by getting a ladder, or making a hook, or Doing More Squats in order to Improve His Vert.

The fable of the fox and the grapes doesn’t feel like a perfect fit, though, because the fox doesn’t engage in any conscious deliberation before giving up on sour grapes; the whole thing takes place subconsciously. Here are some other examples that more closely illustrate the idea of conscious rationalization by use of overly convenient partitions:

The Seating Fallacy:

“Be who you are and say what you feel, because those who mind don't matter and those who matter don't mind.”

This advice is neither good in full generality nor bad in full generality. Clearly there are some situations where some person is worrying too much about other people judging them, or is anxious about inconveniencing others without taking their own preferences into account. But there are also clearly situations (like dealing with an unpleasant, incompetent boss) where fully exposing oneself or saying whatever comes into one's head is not strategic and can be outright disastrous. Without taking into account the specifics of the situation of the recipient of the advice, it is of limited use.

It is convenient to absolve oneself of blame by writing off anybody who challenges our first impulse as someone who ‘doesn’t matter’; it means that if something goes wrong, one can avoid the painful task of analysing and modifying one’s behaviour.

In particular, we have the following corollary:

The Fundamental Fallacy of Dating:

“Be yourself and don’t hide who you are. Be up-front about what you want. If it puts your date off, then they wouldn’t have been good for you anyway, and you’ve dodged a bullet!”

In the short-term it is convenient to not have to filter or reflect on what one says (face-to-face) or writes (online dating). In the longer term, having no filter is not a smart way to approach dating. As the biases and heuristics program has shown, people are often mistaken about what they would prefer under reflection, and are often inefficient and irrational in pursuing what they want. There are complicated courtship conventions governing timelines for revealing information about oneself and negotiating preferences, that have evolved to work around these irrationalities, to the benefit of both parties. In particular, people are dynamically inconsistent, and willing to compromise a lot more later on in a courtship than they thought they would earlier on; it is often a favour to both of you to respect established boundaries regarding revealing information and getting ahead of the current stage of the relationship.

For those who have not much practised the skill of avoiding triggering Too Much Information reactions, it can feel painful and disingenuous to even try changing their behaviour, and they rationalise it via the Fundamental Fallacy. At any given moment, changing this behaviour is painful and causes a flinch reaction, even though the value of information of trying a different approach might be very high, and might cause less pain (e.g. through reduced loneliness) in the long term.

We also have:

PR rationalization and incrimination:

“There’s already enough ammunition out there if anybody wants to assassinate my character, launch a smear campaign, or perform a hatchet job. Nothing I say at this point could make it worse, so there’s no reason to censor myself.”

This is an overly convenient excuse. It does not take into account, for example, that new statements provide a new opportunity for one to come to the attention of quote miners in the first place, or that different statements might be more or less easy to seed a smear campaign; ammunition can vary in type and accessibility, so that adding more can increase the convenience of a hatchet job. It might turn out, after weighing the costs and benefits, that speaking honestly is the right decision. But one can’t know that on the strength of a convenient deontological argument that doesn’t consider those costs. Similarly:

“I’ve already pirated so much stuff I’d be screwed if I got caught. Maybe it was unwise and impulsive at first, but by now I’m past the point of no return.”

 This again fails to take into account the increased risk of one’s deeds coming to attention; if most prosecutions are caused by (even if not purely about) offences shortly before the prosecution, and you expect to pirate long into the future, then your position now is the same as when you first pirated; if it was unwise then, then it’s unwise now.

~~~~

The common fallacy in all these cases is that one looks at only the extreme possibilities, and throws out the inconvenient, ambiguous cases. This results in a disconnected space of possibilities that is engineered to allow one to prove a convenient conclusion. For example, the Seating Fallacy throws out the possibility that there are people who mind but also matter; the Fundamental Fallacy of Dating prematurely rules out people who are dynamically inconsistent or are imperfect introspectors, or who have uncertainty over preferences; PR rationalization fails to consider marginal effects and quantify risks in favour of a lossy binary approach.

What are other examples of situations where people (or Less Wrongers specifically) might fall prey to this failure mode?

Robin Hanson's "Overcoming Bias" posts as an e-book.

21 ciphergoth 31 August 2014 01:26PM

At Luke Muehlhauser's request, I wrote a script to scrape all of Robin Hanson's posts to Overcoming Bias into an e-book; here's a first beta release. Please comment here with any problems—posts in the wrong order, broken links, bad formatting, missing posts. Thanks!

 


 

Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities

20 KatjaGrace 16 September 2014 01:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome to the Superintelligence reading group. This week we discuss the first section in the reading guide, Past developments and present capabilities. This section considers the behavior of the economy over very long time scales, and the recent history of artificial intelligence (henceforth, 'AI'). These two areas are excellent background if you want to think about large economic transitions caused by AI.

This post summarizes the section, and offers a few relevant notes, thoughts, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post. Feel free to jump straight to the discussion. Where applicable, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: Foreword, and Growth modes through State of the art from Chapter 1 (p1-18)


Summary

Economic growth:

  1. Economic growth has become radically faster over the course of human history. (p1-2)
  2. This growth has been uneven rather than continuous, perhaps corresponding to the farming and industrial revolutions. (p1-2)
  3. Thus history suggests large changes in the growth rate of the economy are plausible. (p2)
  4. This makes it more plausible that human-level AI will arrive and produce unprecedented levels of economic productivity.
  5. Predictions of much faster growth rates might also suggest the arrival of machine intelligence, because it is hard to imagine humans - slow as they are - sustaining such a rapidly growing economy. (p2-3)
  6. Thus economic history suggests that rapid growth caused by AI is more plausible than you might otherwise think.

The history of AI:

  1. Human-level AI has been predicted since the 1940s. (p3-4)
  2. Early predictions were often optimistic about when human-level AI would come, but rarely considered whether it would pose a risk. (p4-5)
  3. AI research has been through several cycles of relative popularity and unpopularity. (p5-11)
  4. By around the 1990s, 'Good Old-Fashioned Artificial Intelligence' (GOFAI) techniques based on symbol manipulation gave way to new methods such as artificial neural networks and genetic algorithms. These are widely considered more promising, in part because they are less brittle and can learn from experience more usefully. Researchers have also lately developed a better understanding of the underlying mathematical relationships between various modern approaches. (p5-11)
  5. AI is very good at playing board games. (p12-13)
  6. AI is used in many applications today (e.g. hearing aids, route-finders, recommender systems, medical decision support systems, machine translation, face recognition, scheduling, the financial market). (p14-16)
  7. In general, tasks we thought were intellectually demanding (e.g. board games) have turned out to be easy to do with AI, while tasks which seem easy to us (e.g. identifying objects) have turned out to be hard. (p14)
  8. An 'optimality notion' is the combination of a rule for learning, and a rule for making decisions. Bostrom describes one of these: a kind of ideal Bayesian agent. This is impossible to actually make, but provides a useful measure for judging imperfect agents against. (p10-11)

Notes on a few things

  1. What is 'superintelligence'? (p22 spoiler)
    In case you are too curious about what the topic of this book is to wait until week 3, a 'superintelligence' will soon be described as 'any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest'. Vagueness in this definition will be cleared up later. 
  2. What is 'AI'?
    In particular, how does 'AI' differ from other computer software? The line is blurry, but basically AI research seeks to replicate the useful 'cognitive' functions of human brains ('cognitive' is perhaps unclear, but for instance it doesn't have to be squishy or prevent your head from imploding). Sometimes AI research tries to copy the methods used by human brains. Other times it tries to carry out the same broad functions as a human brain, perhaps better than a human brain. Russell and Norvig (p2) divide prevailing definitions of AI into four categories: 'thinking humanly', 'thinking rationally', 'acting humanly' and 'acting rationally'. For our purposes however, the distinction is probably not too important.
  3. What is 'human-level' AI? 
    We are going to talk about 'human-level' AI a lot, so it would be good to be clear on what that is. Unfortunately the term is used in various ways, and often ambiguously. So we probably can't be that clear on it, but let us at least be clear on how the term is unclear. 

    One big ambiguity is whether you are talking about a machine that can carry out tasks as well as a human at any price, or a machine that can carry out tasks as well as a human at the price of a human. These are quite different, especially in their immediate social implications.

    Other ambiguities arise in how 'levels' are measured. If AI systems were to replace almost all humans in the economy, but only because they are so much cheaper - though they often do a lower quality job - are they human level? What exactly does the AI need to be human-level at? Anything you can be paid for? Anything a human is good for? Just mental tasks? Even mental tasks like daydreaming? Which or how many humans does the AI need to be the same level as? Note that in a sense most humans have been replaced in their jobs before (almost everyone used to work in farming), so if you use that metric for human-level AI, it was reached long ago, and perhaps farm machinery is human-level AI. This is probably not what we want to point at.

    Another thing to be aware of is the diversity of mental skills. If by 'human-level' we mean a machine that is at least as good as a human at each of these skills, then in practice the first 'human-level' machine will be much better than a human on many of those skills. It may not seem 'human-level' so much as 'very super-human'.

    We could instead think of human-level as closer to 'competitive with a human' - where the machine has some super-human talents and lacks some skills humans have. This is not usually used, I think because it is hard to define in a meaningful way. There are already machines for which a company is willing to pay more than a human: in this sense a microscope might be 'super-human'. There is no reason for a machine which is equal in value to a human to have the traits we are interested in talking about here, such as agency, superior cognitive abilities or the tendency to drive humans out of work and shape the future. Thus we talk about AI which is at least as good as a human, but you should beware that the predictions made about such an entity may apply before the entity is technically 'human-level'.


    Example of how the first 'human-level' AI may surpass humans in many ways.

    Because of these ambiguities, AI researchers are sometimes hesitant to use the term. e.g. in these interviews.
  4. Growth modes (p1) 
    Robin Hanson wrote the seminal paper on this issue. Here's a figure from it, showing the step changes in growth rates. Note that both axes are logarithmic. Note also that the changes between modes don't happen overnight. According to Robin's model, we are still transitioning into the industrial era (p10 in his paper).
  5. What causes these transitions between growth modes? (p1-2)
    One might be happier making predictions about future growth mode changes if one had a unifying explanation for the previous changes. As far as I know, we have no good idea of what was so special about those two periods. There are many suggested causes of the industrial revolution, but nothing uncontroversially stands out as 'twice in history' level of special. You might think the small number of datapoints would make this puzzle too hard. Remember however that there are quite a lot of negative datapoints - you need a cause that wasn't present at all of the other times in history.
  6. Growth of growth
    It is also interesting to compare world economic growth to the total size of the world economy. For the last few thousand years, the economy seems to have grown faster more or less in proportion to its size (see figure below). Extrapolating such a trend would lead to an infinite economy in finite time. In fact for the thousand years until 1950 such extrapolation would place an infinite economy in the late 20th Century! The time since 1950 has apparently been unusual.

    (Figure from here)
  7. Early AI programs mentioned in the book (p5-6)
    You can see them in action: SHRDLU, Shakey, General Problem Solver (not quite in action), ELIZA.
  8. Later AI programs mentioned in the book (p6)
    Algorithmically generated Beethoven, algorithmic generation of patentable inventions, artificial comedy (requires download).
  9. Modern AI algorithms mentioned (p7-8, 14-15) 
    Here is a neural network doing image recognition. Here is artificial evolution of jumping and of toy cars. Here is a face detection demo that can tell you your attractiveness (apparently not reliably), happiness, age, gender, and which celebrity it mistakes you for.
  10. What is maximum likelihood estimation? (p9)
    Bostrom points out that many types of artificial neural network can be viewed as classifiers that perform 'maximum likelihood estimation'. If you haven't come across this term before, the idea is to find the situation that would make your observations most probable. For instance, suppose a person writes to you and tells you that you have won a car. The situation that would have made this scenario most probable is the one where you have won a car, since in that case you are almost guaranteed to be told about it. Note that this doesn't imply that you should think you won a car, if someone tells you that. Being the target of a spam email might only give you a low probability of being told that you have won a car (a spam email may instead advise you of products, or tell you that you have won a boat), but spam emails are so much more common than actually winning cars that most of the time if you get such an email, you will not have won a car. If you would like a better intuition for maximum likelihood estimation, Wolfram Alpha has several demonstrations (requires free download).
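As a minimal concrete sketch (my own example, not from the book): given a series of coin flips, the maximum likelihood estimate of the coin's bias is whichever bias makes the observed flips most probable — which works out to be the sample frequency of heads.

```python
import math

def log_likelihood(p, heads, tails):
    """Log-probability of observing `heads` heads and `tails` tails given bias p."""
    return heads * math.log(p) + tails * math.log(1.0 - p)

def mle_bias(heads, tails, grid_points=999):
    """Grid search for the bias that maximizes the likelihood of the data."""
    candidates = [(i + 1) / (grid_points + 1) for i in range(grid_points)]
    return max(candidates, key=lambda p: log_likelihood(p, heads, tails))

# 7 heads and 3 tails: the most likely bias is the sample frequency, 0.7.
print(mle_bias(7, 3))  # → 0.7
```

Note the same caveat as the car example: 0.7 is the bias that best explains the data, which is not the same as the bias you should believe in if you have a strong prior (e.g. that most coins are fair).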
  11. What are hill climbing algorithms like? (p9)
    The second large class of algorithms Bostrom mentions are hill climbing algorithms. The idea here is fairly straightforward, but if you would like a better basic intuition for what hill climbing looks like, Wolfram Alpha has a demonstration to play with (requires free download).
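A minimal hill-climbing sketch (my own, not the Wolfram demonstration): repeatedly step in whichever direction improves the objective, and stop when no neighboring step helps.

```python
def hill_climb(f, x=0.0, step=0.01, max_iters=10_000):
    """Greedy 1-D hill climbing: move toward higher f until no neighbor improves."""
    for _ in range(max_iters):
        left, right = x - step, x + step
        best = max((x, left, right), key=f)
        if best == x:  # no neighbor improves: we are at a (local) maximum
            return x
        x = best
    return x

# Climb a simple hill whose peak is at x = 3.
peak = hill_climb(lambda x: -(x - 3.0) ** 2)
print(peak)  # close to 3.0
```

The catch, of course, is the word "local": on a bumpy objective this procedure stops at whatever hilltop it reaches first, which is why hill climbing is usually combined with restarts or randomness.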

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions:

  1. How have investments into AI changed over time? Here's a start, estimating the size of the field.
  2. What does progress in AI look like in more detail? What can we infer from it? I wrote about algorithmic improvement curves before. If you are interested in plausible next steps here, ask me.
  3. What do economic models tell us about the consequences of human-level AI? Here is some such thinking; Eliezer Yudkowsky has written at length about his request for more.

How to proceed

This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about what AI researchers think about human-level AI: when it will arrive, what it will be like, and what the consequences will be. To prepare, read Opinions about the future of machine intelligence from Chapter 1 and also When Will AI Be Created? by Luke Muehlhauser. The discussion will go live at 6pm Pacific time next Monday 22 September. Sign up to be notified here.

A Guide to Rational Investing

19 ColbyDavis 15 September 2014 02:36AM

Hello Less Wrong, I don't post here much but I've been involved in the Bay Area Less Wrong community for several years, which is where many of you know me from. The following is a white paper I wrote earlier this year for my firm, RHS Financial, a San Francisco based private wealth management practice. A few months ago I presented it at a South Bay Less Wrong meetup. Since then many of you have encouraged me to post it here for the rest of the community to see. The original can be found here; please refer to the disclosures, especially if you are the SEC. I have added an afterword beneath the citations to address some criticisms I have encountered since writing it. As a company white paper intended for a general audience, please forgive me if the following is a little too self-promoting or spends too much time on ground already well-trodden here, but I think many of you will find it of value. Hope you enjoy!


Executive Summary: Capital markets have created enormous amounts of wealth for the world and reward disciplined, long-term investors for their contribution to the productive capacity of the economy. Most individuals would do well to invest most of their wealth in capital market assets, particularly equities. Most investors, however, consistently make poor investment decisions as a result of a poor theoretical understanding of financial markets as well as cognitive and emotional biases, leading to inferior investment returns and inefficient allocation of capital. Using an empirically rigorous approach, a rational investor may reasonably expect to exploit inefficiencies in the market and earn excess returns in so doing.


Most people understand that they need to save money for their future, and surveys consistently find a large majority of Americans expressing a desire to save and invest more than they currently are. Yet the savings rate and percentage of people who report owning stocks has trended down in recent years,1 despite the increasing ease with which individuals can participate in financial markets, thanks to the spread of discount brokers and employer 401(k) plans. Part of the reason for this is likely the unrealistically pessimistic expectations of would-be investors. According to a recent poll barely one third of Americans consider equities to be a good way to build wealth over time.2 The verdict of history, however, is against the skeptics.


The Greatest Deal of all Time


Equity ownership is probably the easiest, most powerful means of accumulating wealth over time, and people regularly forego millions of dollars over the course of their lifetimes letting their wealth sit in cash. Since its inception in 1926, the annualized total return on the S&P 500 has been 9.8% as of the end of 2012.3 $1 invested back then would be worth $3,533 by the end of the period. More saliently, a 25 year old investor investing $5,000 per year at that rate would have about $2.1 million upon retirement at 65.


The strong performance of stock markets is robust to different times and places. Though the most accurate data on the US stock market goes back to 1926, financial historians have gathered information going back to 1802 and find the average annualized real return in earlier periods is remarkably close to the more recent official records. Looking at rolling 30 year returns between 1802 and 2006, the lowest and highest annualized real returns have been 2.6% and 10.6%, respectively.4 The United States is not unique in its experience, either. In a massive study of the sixteen countries that had data on local stock, bond, and cash returns available for every year of the twentieth century, the stock market in every one had significant, positive real returns that exceeded those of cash and fixed income alternatives.5 The historical returns of US stocks only slightly exceed those of the global average.


The opportunity cost of not holding stocks is enormous. Historically the interest earned on cash equivalent investments like savings accounts has barely kept up with inflation - over the same since-1926 period inflation has averaged 3.0% while the return on 30-day treasury bills (a good proxy for bank savings rates) has been 3.5%.6 That 3.5% rate would only earn an investor $422k over the same $5k/year scenario above. The situation today is even worse. Most banks are currently paying about 0.05% on savings.


Similarly, investment grade bonds, such as those issued by the US Treasury and highly rated corporations, though often an important component of a diversified portfolio, have offered returns only modestly better than cash over the long run. The average return on 10-year treasury bonds has been 5.1%,7 earning an investor $619k over the same 40 year scenario. The yield on the 10-year treasury is currently about 3%.
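The dollar figures in the preceding paragraphs all follow from the standard future-value-of-an-annuity formula, FV = C · ((1 + r)^n − 1) / r. A quick check, plugging in the return assumptions quoted above:

```python
def future_value(annual_contribution, rate, years):
    """Future value of a level annual contribution compounded at `rate` per year."""
    return annual_contribution * ((1 + rate) ** years - 1) / rate

# $5,000/year for 40 years at the historical rates quoted in the text.
stocks = future_value(5000, 0.098, 40)  # S&P 500:        ~ $2.1 million
cash   = future_value(5000, 0.035, 40)  # 30-day T-bills: ~ $422,000
bonds  = future_value(5000, 0.051, 40)  # 10-yr Treasurys: ~ $619,000
print(round(stocks), round(cash), round(bonds))
```

The gap between the three lines is the opportunity cost the text describes: the same saving effort ends up roughly five times larger in stocks than in cash.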


Homeownership has long been a part of the American dream, and many have been taught that building equity in your home is the safest and most prudent way to save for the future. The fact of the matter, however, is that residential housing is more of a consumption good than an investment. Over the last century the value of houses has barely kept up with inflation,8 and as the recent mortgage crisis demonstrated, home prices can crash just like any other market.


In virtually every time and place we look, equities are the best performing asset available, a fact which is consistent with the economic theory that risky assets must offer a premium to their investors to compensate them for the additional uncertainty they bear. What has puzzled economists for decades is why the so-called equity risk premium is so large and why so many individuals invest so little in stocks.9


Your Own Worst Enemy


Recent insights from multidisciplinary approaches in cognitive science have shed light on the issue, demonstrating that instead of rationally optimizing between various trade-offs, human beings regularly rely on heuristics - mental shortcuts that require little cognitive effort - when making decisions.10 These heuristics lead to taking biased approaches to problems that deviate from optimal decision making in systematic and predictable ways. Such biases affect financial decisions in a large number of ways, one of the most profound and pervasive being the tendency of myopic loss aversion.


Myopic loss aversion refers to the combined result of two observed regularities in the way people think: that losses feel bad to a greater extent than equivalent gains feel good, and that people rely too heavily (anchor) on recent and readily available information.11 Taken together, it is easy to see how these mental errors could bias an individual against holding stocks. Though the historical and expected return on equities greatly exceeds those of bonds and cash, over short time horizons they can suffer significant losses. And while the loss of one’s home equity is generally a nebulous abstraction that may not manifest itself consciously for years, stock market losses are highly visible, drawing attention to themselves in brokerage statements and newspaper headlines. Not surprisingly, then, an all too common pattern among investors is to start investing at a time when the headlines are replete with stories of the riches being made in markets, only to suffer a pullback and quickly sell out at ten, twenty, thirty plus percent losses and sit on cash for years until the next bull market is again near its peak in a vicious circle of capital destruction. Indeed, in the 20 year period ending 2012, the S&P 500 returned 8.2% and investment grade bonds returned 6.3% annualized. The inflation rate was 2.5%, and the average retail investor earned an annualized rate of 2.3%.12
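To see what that behavior gap costs, compound the quoted annualized rates over the same 20 years (a back-of-the-envelope check using only the figures cited above; the $10,000 starting balance is my own illustrative choice):

```python
def grow(initial, rate, years):
    """Compound `initial` at an annual `rate` for `years` years."""
    return initial * (1 + rate) ** years

market  = grow(10_000, 0.082, 20)  # buy-and-hold S&P 500 investor
typical = grow(10_000, 0.023, 20)  # average retail investor
print(round(market), round(typical))
```

Buying and holding the index turns $10,000 into roughly $48,000; the average retail investor's 2.3% turns it into roughly $16,000 — less than inflation alone would demand just to break even in real terms.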


Even when investors can overcome their myopic loss aversion and stay in the stock market for the long haul, investment success is far from assured. The methods by which investors choose which stocks or stock managers to buy, hold, and sell are also subject to a host of biases which consistently lead to suboptimal investing and performance. Chief among these is overconfidence, the belief that one’s judgments and skills are reliably superior.


Overconfidence is endemic to the human experience. The vast majority of people think of themselves as more intelligent, attractive, and competent than most of their peers,13 even in the face of proof to the contrary. 93% of people consider themselves to be above-average drivers,14 for example, and that percentage decreases only slightly if you ask people to evaluate their driving skill after being admitted to a hospital following a traffic accident.15 Similarly, most investors are confident they can consistently beat the market. One survey found 74% of mutual fund investors believed the funds they held would “consistently beat the S&P 500 every year” in spite of the statistical reality that more than half of US stock funds underperform in a given year and virtually none will outperform it each and every year. Many investors will even report having beaten the index despite having verifiably underperformed it by several percentage points.16


Overconfidence leads investors to take outsized bets on what they know and are familiar with. Investors around the world commonly hold 80% or more of their portfolios in investments from their own country,17 and one third of 401(k) assets are invested in participants’ own employer’s stock.18 Such concentrated portfolios are demonstrably riskier than a broadly diversified portfolio, yet investors regularly evaluate their investments as less risky than the general market, even if their securities had recently lost significantly more than the overall market.


If an investor believes himself to possess superior talent in selecting investments, he is likely to trade more as a result in an attempt to capitalize on each new opportunity that presents itself. In this endeavor, the harder investors try, the worse they do. In one major study, the quintile of investors who traded the most over a five year period earned an average annualized 7.1 percentage points less than the quintile that traded the least.19


The Folly of Wall Street


Relying on experts does little to help. Wall Street employs an army of analysts to follow every move of all the major companies traded on the market, predicting their earnings and their expected performance relative to peers, but on the whole they are about as effective as a strategy of throwing darts. Burton Malkiel explains in his book A Random Walk Down Wall Street how he tracked the one and five year earnings forecasts on companies in the S&P 500 from analysts at 19 Wall Street firms and found that in aggregate the estimates had no more predictive power than if you had just assumed a given company’s earnings would grow at the same rate as the long-term average rate of growth in the economy. This is consistent with a much broader body of literature demonstrating that the predictions of statistical prediction rules - formulas that make predictions based on simple statistical rules - reliably outperform those of human experts. Statistical prediction rules have been used to predict the auction price of Bordeaux better than expert wine tasters,20 marital happiness better than marriage counselors,21 academic performance better than admissions officers,22 criminal recidivism better than criminologists,23 and bankruptcy better than loan officers,24 to name just a few examples. This is an incredible finding that’s difficult to overstate. When considering complex issues such as these our natural intuition is to trust experts who can carefully weigh all the relevant information in determining the best course of action. But in reality experts are simply humans who have had more time to reinforce their preconceived notions on a particular topic and are more likely to anchor their attention on items that only introduce statistical noise.


Back in the world of finance, it turns out that to a first approximation the best estimate of the return to expect from a given stock is the long-run historical average of the stock market, and the best estimate of the return to expect from a stock picking mutual fund is the long-run historical average of the stock market minus its fees. The active stock pickers who manage mutual funds have on the whole demonstrated little ability to outperform the market. To be sure, at any given time there are plenty of managers who have recently beaten the market soundly, and if you look around you will even find a few with records that have been terrific over ten years or more. But just as a coin-flipping contest between thousands of contestants would no doubt yield a few who had uncannily “called it” a dozen or more times in a row, the number of market beating mutual fund managers is no greater than what you should expect as a result of pure luck.25
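The coin-flipping analogy is easy to make quantitative (my own sketch, with an arbitrary contestant count): among enough contestants, a few perfect streaks of a dozen correct calls are expected by chance alone.

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

contestants, flips = 10_000, 12

# A contestant "calls it" if all 12 of their 50/50 guesses come up right.
streaks = sum(
    all(random.random() < 0.5 for _ in range(flips))
    for _ in range(contestants)
)

# Each contestant has a (1/2)**12 = 1/4096 chance of a perfect record,
# so among 10,000 contestants we expect a couple of "geniuses" by pure luck.
expected = contestants * 0.5 ** flips  # ≈ 2.44
print(streaks, expected)
```

The point is not that skilled managers cannot exist, but that a handful of long winning records is exactly what a skill-free market would produce anyway, so the records alone are weak evidence.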


Expert and amateur investors alike underestimate how competitive the capital markets are. News is readily available and quickly acted upon, and any fact you know about that you think gives you an edge is probably already a value in the cells of thousands of spreadsheets of analysts trading billions of dollars. Professor of Finance at Yale and Nobel Laureate Robert Shiller makes this point in a lecture using an example of a hypothetical drug company that announces it has received FDA approval to market a new drug:


Suppose you then, the next day, read in The Wall Street Journal about this new announcement. Do you think you have any chance of beating the market by trading on it? I mean, you're like twenty-four hours late, but I hear people tell me — I hear, "I read in Business Week that there was a new announcement, so I'm thinking of buying." I say, "Well, Business Week — that information is probably a week old." Even other people will talk about trading on information that's years old, so you kind of think that maybe these people are naïve. First of all, you're not a drug company expert or whatever it is that's needed. Secondly, you don't know the math — you don't know how to calculate present values, probably. Thirdly, you're a month late. You get the impression that a lot of people shouldn't be trying to beat the market. You might say, to a first approximation, the market has it all right so don't even try.26


In that last sentence Shiller hints at one of the most profound and powerful ideas in finance: the efficient market hypothesis. The core of the efficient market hypothesis is that when news that impacts the value of a company is released, stock prices will adjust instantly to account for the new information and bring it back to equilibrium where it’s no longer a “good” or “bad” investment but simply a fair one for its risk level. Because news is unpredictable by definition, it is impossible then to reliably outperform the market as a whole, and the seemingly ingenious investors on the latest cover of Forbes or Fortune are simply lucky.


A Noble Lie


In the 50s, 60s, and 70s several economists who would go on to win Nobel prizes worked out the implications of the efficient market hypothesis and created a new intellectual framework known as modern portfolio theory.27 The upshot is that capital markets reward investors for taking risk, and the more risk you take, the higher your return should be (in expectation, it might not turn out to be the case, which is why it’s risky). But the market doesn’t reward unnecessary risk, such as taking out a second mortgage to invest in your friend’s hot dog stand. It only rewards systematic risk, the risks associated with being exposed to the vagaries of the entire economy, such as interest rates, inflation, and productivity growth.28 Stocks of small companies are riskier and have a higher expected return than stocks of large companies, which are riskier than corporate bonds, which are riskier than Treasury bonds. But owning one small cap stock doesn’t offer a higher expected return than another small cap stock, or a portfolio of hundreds of small caps for that matter. Owning more of a particular stock merely exposes you to the idiosyncratic risks that particular company faces and for which you are not compensated. By diversifying assets across as many securities as possible, you can reduce the volatility of your portfolio without lowering its expected return.
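That diversification claim can be checked with a small simulation (purely illustrative numbers: independent stocks with identical 8% expected return and 20% volatility): an equal-weight portfolio of many such stocks keeps the same expected return, but its volatility shrinks roughly as 1/√N.

```python
import random
import statistics

random.seed(42)  # reproducible simulation

def simulate_returns(n_stocks, n_years=2000, mean=0.08, stdev=0.20):
    """Yearly returns of an equal-weight portfolio of independent stocks."""
    return [
        sum(random.gauss(mean, stdev) for _ in range(n_stocks)) / n_stocks
        for _ in range(n_years)
    ]

one_stock = simulate_returns(1)
portfolio = simulate_returns(100)

# Similar average return; far lower volatility for the diversified portfolio.
print(statistics.mean(one_stock), statistics.stdev(one_stock))
print(statistics.mean(portfolio), statistics.stdev(portfolio))
```

In reality stocks are positively correlated, so diversification cannot eliminate risk entirely — the floor it leaves behind is exactly the systematic risk the market does compensate you for.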


This approach to investing dictates that you should determine an acceptable level of risk for your portfolio, then buy the largest basket of securities possible that targets that risk, ideally while paying the least amount possible in fees. Academic activism in favor of this passive approach gained momentum through the 70s, culminating in the launch of the first commercially available index fund in 1976, offered by The Vanguard Group. The typical index fund seeks to replicate the overall market performance of a broad class of investments such as large US stocks by owning all the securities in that market in proportion to their market weights. Thus if XYZ stock makes up 2% of the value of the relevant asset class, the index fund will allocate 2% of its funds to that stock. Because index funds only seek to replicate the market instead of beating it, they save costs on research and management teams and pass the savings along to investors through lower fees.
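Market-cap weighting is mechanically simple — a sketch with made-up capitalizations (these tickers and figures are hypothetical, echoing the XYZ example above):

```python
# Hypothetical market capitalizations in billions; not real companies' data.
caps = {"XYZ": 20.0, "ABC": 500.0, "DEF": 480.0}

total = sum(caps.values())
weights = {ticker: cap / total for ticker, cap in caps.items()}

# A fund tracking this market allocates each dollar in these proportions:
# XYZ gets 20/1000 = 2% of the fund, matching its share of the market.
print(weights)
```

One pleasant side effect of this scheme is that it is largely self-maintaining: when a stock's price moves, its weight in both the market and the fund move together, so the fund rarely needs to trade.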


Index funds were originally derided and attracted little investment, but years of passionate advocacy by popularizers such as Jack Bogle and Burton Malkiel as well as the consensus of the economics profession has helped to lift them into the mainstream. Index funds now command trillions of dollars of assets and cover every segment of the market in stocks, bonds, and alternative assets in the US and abroad. In 2003 Vanguard launched its target retirement funds, which took the logic of passive investing even further by providing a single fund that would automatically shift from more aggressive to more conservative index investments as its investors approached retirement. Target retirement funds have since become especially popular options in 401(k) plans.


The rise of index investing has been a boon to individual investors, who have clearly benefited from the lower fees and greater diversification they offer. To the extent that investors have bought into the idea of passive investing over market timing and active security selection they have collectively saved themselves a fortune by not giving in to their value-destroying biases. For all the good index funds have done though, since their birth in the 70s, the intellectual foundation upon which they stand, the efficient market hypothesis, has been all but disproved.


The EMH is now the noble lie of the economics profession; while economists usually teach their students and the public that the capital markets are efficient and unbeatable, their research over the last few decades has shown otherwise. In a telling example, Paul Samuelson, who helped originate the EMH and advocated it in his best selling textbook, was a large, early investor in Berkshire Hathaway, Warren Buffett’s active investment holding company.29 But real people regularly ruin their lives through sloppy investing, and for them perhaps it is better just to say that beating the market can’t be done, so just buy, hold, and forget about it. We, on the other hand, believe a more nuanced understanding of the facts can be helpful.


Premium Investing


Shortly after the efficient market hypothesis was first put forth researchers realized the idea had serious theoretical shortcomings.30 Beginning as early as 1977 they also found empirical “anomalies,” factors other than systematic risk that seemed to predict returns.31 Most of the early findings focused on valuation ratios - measures of a firm’s market price in relation to an accounting measure such as book value or earnings - and found that “cheap” stocks on average outperformed “expensive” stocks, confirming the value investment philosophy first promulgated by the legendary Depression-era investor Benjamin Graham and popularized by his most famous student, Warren Buffett. In 1992 Eugene Fama, one of the fathers of the efficient market hypothesis, published, along with Ken French, a groundbreaking paper demonstrating that the cheapest decile stocks in the US, as measured by the price to book ratio, outperformed the highest decile stocks by an astounding 11.9% per year, despite there being little difference in risk between them.32


A year later, researchers found convincing evidence of a momentum anomaly in US stocks: stocks that had the highest performance over the last 3-12 months continued to outperform relative to those with the lowest performance. The effect size was comparable to that of the value anomaly and again the discrepancy could not be explained with any conventional measure of risk.33


Since then, researchers have replicated the value and momentum effects across larger and deeper datasets, finding comparably large effect sizes in different times, regions, and asset classes. In a highly ambitious 2012 paper, Clifford Asness (a former student of Fama’s) and Tobias Moskowitz documented the significance of value and momentum across 18 national equity markets, 10 currencies, 10 government bonds, and 27 commodity futures.


Though value and momentum are the most pervasive and best documented of the market anomalies, many others have been discovered across the capital markets. Others include the small-cap premium34 (small company stocks tend to outperform large company stocks even in excess of what should be expected by their risk), the liquidity premium35 (less frequently traded securities tend to outperform more frequently traded securities), short-term reversal36 (equities with the lowest one-week to one-month performance tend to outperform over short time horizons), carry37 (high-yielding currencies tend to appreciate against low-yielding currencies), roll yield38,39 (bonds and futures at steeply negatively sloped points along the yield curve tend to outperform those at flatter or positively sloped points), profitability40 (equities of firms with higher proportions of profits over assets or equity tend to outperform those with lower profitability), calendar effects41 (stocks tend to have stronger returns in January and weaker returns on Mondays), and corporate action premia42 (securities of corporations that will, currently are, or have recently engaged in mergers, acquisitions, spin-offs, and other events tend to consistently under or outperform relative to what would be expected by their risk).


Most of these market anomalies appear remarkably robust compared to findings in other social sciences,43 especially considering that they seem to imply trillions of dollars of easy money is being overlooked in plain sight. Intelligent observers often question how such inefficiencies could possibly persist in the face of such strong incentives to exploit them until they disappear. Several explanations have been put forth, some of which are conflicting but which all probably have some explanatory power.


The first interpretation of the anomalies is to deny that they are actually anomalous, but rather are compensation for risk that isn’t captured by the standard asset pricing models. This is the view of Eugene Fama, who first postulated that the value premium was compensation for assuming risk of financial distress and bankruptcy that was not fully captured by simply measuring the standard deviation of a value stock’s returns.44 Subsequent research, however, disproved that the value effect was explained by exposure to financial distress.45 More sophisticated arguments point to the fact that the excess returns of value, momentum, and many other premiums exhibit greater skewness, kurtosis, or other statistical moments than the broad market: subtle statistical indications of greater risk, but the differences hardly seem large enough to justify the large return premiums observed.46


The only sense in which e.g. value and momentum stocks seem genuinely “riskier” is in career risk; though the factor premiums are significant and robust in the long term, they are not consistent or predictable along short time horizons. Reaping their rewards requires patience, and an analyst or portfolio manager who recommends an investment for his clients based on these factors may end up waiting years before it pays off, typically more than enough time to be fired.47 Though any investment strategy is bound to underperform at times, strategies that seek to exploit the factors most predictive of excess returns are especially susceptible to reputational hazard. Value stocks tend to be from unpopular companies in boring, slow growth industries. Momentum stocks are often from unproven companies with uncertain prospects or are from fallen angels who have only recently experienced a turn of luck. Conversely, stocks that score low on value and momentum factors are typically reputable companies with popular products that are growing rapidly and forging new industry standards in their wake.


Consider then, two companies in the same industry: Ol’Timer Industries, which has been around for decades and is consistently profitable but whose product lines are increasingly considered uncool and outdated. Recent attempts to revamp the company’s image by the firm’s new CEO have had modest success, but consumers and industry experts expect this to be just delaying further inevitable loss of market share to NuTime.ly, founded eight years ago and posting exponential revenue growth and rapid adoption by the coveted 18-35 year old demographic, who typically describe its products using a wide selection of contemporary idioms and slang indicating superior social status and functionality. Ol’Timer Industries’ stock will likely score highly on value and momentum factors relative to NuTime.ly and so have a higher expected return. But consider the incentives of the investment professional choosing between the two: if he chooses Ol’Timer and it outperforms he may be congratulated and rewarded perhaps slightly more than if he had chosen NuTime.ly and it outperformed, but if he chooses Ol’Timer and it underperforms he is a fool and a laughingstock who wasted clients’ money on his pet theory when “everyone knew” NuTime.ly was going to win. At least if he chooses NuTime.ly and it underperforms it was a fluke that none of his peers saw coming, save for a few wingnuts who keep yammering about the arcane theories of Gene Fama and Benjamin Graham.


For most investors, “it is better for reputation to fail conventionally than to succeed unconventionally” as John Maynard Keynes observed in his General Theory. Not that this is at all restricted to investors, professional or amateur. In a similar vein, professional soccer goalkeepers continue to jump left or right on penalty kicks when statistics show they’d block more shots standing still.48 But standing in place while the ball soars into the upper right corner makes the goalkeeper look incompetent. The proclivity of middle managers and bureaucrats to default to uncontroversial decisions formed by groupthink is familiar enough to be the stuff of popular culture; nobody ever got fired for buying IBM, as the saying goes. Psychological experiments have shown that people will often affirm an obviously false observation about simple facts such as the relative lengths of straight lines on a board if others have affirmed it before them.49


We find ourselves back at the nature of human thinking and the biases and other cognitive errors that afflict it. This is what most interpretations of the market anomalies focus on. Both amateur and professional investors are human beings who are apt to make investment decisions not through a methodical application of modern portfolio theory but based rather on stories, anecdotes, hunches, and ideologies. Most of the anomalies make sense in light of an understanding of some of the most common biases such as anchoring and availability bias, status quo bias, and herd behavior.50 Rational investors seeking to exploit these inefficiencies may be able to do so to a limited extent, but if they are using other peoples’ money then they are constrained by the biases of their clients. The more aggressively they attempt to exploit market inefficiencies, the more they risk badly underperforming the market long enough to suffer devastating withdrawals of capital.51


It is no surprise then, that the most successful investors have found ways to rely on “sticky” capital unlikely to slip out of their control at the worst time. Warren Buffett invests the float of his insurance company holdings, which behaves in actuarially predictable ways; David Swensen manages the Yale endowment fund, which has an explicitly indefinite time horizon and a rules based spending rate; Renaissance Technologies, arguably the most successful hedge fund ever, only invests its own money; Dimensional Fund Advisors, one of the only mutual fund companies that has consistently earned excess returns through factor premiums, only sells through independent financial advisors who undergo a due diligence process to ensure they share similar investment philosophies.


Building a Better Portfolio


So what is an investor to do? The prospect of delicately crafting a portfolio that’s adequately diversified while taking advantage of return premiums may seem daunting, and one may be tempted to simply buy a Vanguard target retirement fund appropriate for one’s age and be done with it. Doing so is certainly a reasonable option. But we believe that with a disciplined investment strategy informed by the findings discussed above, superior results are possible.


The first place to start is an assessment of your risk tolerance. How far can your portfolio fall before it adversely affects your quality of life? For investors saving for retirement with many more years of work ahead of them, the answer will likely be “quite a lot.” With ten years or more to work with, your portfolio will likely recover from even the most extreme bear markets. But people do not naturally think in ten-year increments, and many must live off their portfolio principal; accept that in the short term your portfolio will sometimes be in the red, and consider what percentage decline over a period of a few months to a year you are comfortable enduring. Over a one-year period the “worst case scenario” on diversified stock portfolios is historically about a 40% decline. For a traditional “moderate” portfolio of 60% stocks and 40% bonds, it has been about a 25% decline.52
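As a rough sketch of how these historical figures might translate into an allocation bound, one could cap equity exposure with the simple arithmetic below. The assumption (labeled in the code) that high-grade bonds roughly hold their value in an equity crash is optimistic, and the numbers are the approximate historical worst cases cited above, not forecasts:

```python
def max_equity_weight(tolerable_loss, stock_worst=0.40):
    """Crude cap on equity allocation from a tolerable one-year loss.

    Assumes high-grade bonds roughly hold their value in an equity
    crash -- optimistic, since the historical 60/40 worst case of ~25%
    implies bonds fell somewhat too. An illustration, not advice.
    """
    return min(1.0, tolerable_loss / stock_worst)

# An investor comfortable with at most a 20% one-year decline:
print(f"max equity: {max_equity_weight(0.20):.0%}")  # max equity: 50%
```

Treat the result as an upper limit: since bonds can decline alongside stocks, the true worst case for a blended portfolio is somewhat worse than this linear arithmetic suggests.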


With a target for how much risk to accept in your portfolio, modern portfolio theory offers a technique for achieving the most efficient tradeoff between risk and return: mean-variance optimization (MVO). An adequate treatment of MVO is beyond the scope of this paper,53 but essentially the task is to forecast expected returns on the major asset classes (e.g. US stocks, international stocks, and investment-grade bonds), then compute the weights for each that will achieve the highest expected return for a given amount of risk. We use an approach to mean-variance optimization known as the Black-Litterman model54 and estimate expected returns using a limited number of simple inputs; for example, the expected return on an index of stocks can be closely approximated as the current dividend yield plus the long-run growth rate of the economy.55
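As a toy illustration of the two pieces just described — the dividend-yield-plus-growth estimate feeding a mean-variance search — here is a two-asset sketch. It is not the Black-Litterman machinery, and every rate, volatility, and correlation below is an assumption made up for the example, not our actual capital market expectations:

```python
# Toy two-asset mean-variance optimization. All inputs are illustrative
# assumptions for this sketch, not actual capital market expectations.

# Expected stock return approximated as dividend yield + long-run growth:
dividend_yield = 0.02
long_run_growth = 0.05            # assumed long-run nominal growth rate
exp_stock = dividend_yield + long_run_growth

exp_bond = 0.03                   # assumed expected bond return
vol_stock, vol_bond = 0.18, 0.05  # assumed annual volatilities
corr = 0.1                        # assumed stock/bond correlation
risk_aversion = 4.0               # higher = more conservative investor

best = None
for w in range(101):              # stock weight, in percent
    ws = w / 100
    wb = 1 - ws
    mean = ws * exp_stock + wb * exp_bond
    var = ((ws * vol_stock) ** 2 + (wb * vol_bond) ** 2
           + 2 * ws * wb * corr * vol_stock * vol_bond)
    utility = mean - 0.5 * risk_aversion * var  # mean-variance utility
    if best is None or utility > best[0]:
        best = (utility, ws)

print(f"optimal stock weight: {best[1]:.0%}")
```

A real implementation would optimize over many asset classes at once and, as the text notes, blend the return forecasts with equilibrium-implied returns rather than using them raw.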


With optimal portfolio weights determined, next the investor must select the investment vehicles to use to gain exposure to the various asset classes. Though traditional index funds are a reasonable option, in recent years several “enhanced index” mutual funds and ETFs have been released that provide inexpensive, broad exposure to the hundreds or thousands of securities in a given asset class while enhancing exposure to one or more of the major factor premiums discussed above, such as value, profitability, or momentum. Research Affiliates, for example, licenses a “fundamental index” that has been shown to provide efficient exposure to value and small-cap stocks across many markets.56 These “RAFI” indexes have been licensed to the asset management firms Charles Schwab and PowerShares to be made available through mutual funds and ETFs to the general investing public, and have generally outperformed their traditional index fund counterparts since inception.


Over the course of time, portfolio allocations will drift from their optimized allocations as particular asset classes inevitably outperform others. Leaving this unchecked can lead to a portfolio that is no longer risk-return efficient. The investor must periodically rebalance the portfolio by selling securities that have become overweight and buying others that are underweight. Research suggests that by setting “tolerance bands” around target asset allocations, monitoring the portfolio frequently, and trading when weights drift outside tolerance, investors can take further advantage of inter-asset-class value and momentum effects and boost return while reducing risk.57
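The tolerance-band idea can be sketched in a few lines. The targets and the 20% relative band below are hypothetical numbers for illustration; the research cited determines band widths empirically:

```python
# Minimal sketch of tolerance-band rebalancing. Targets and the 20%
# relative band are illustrative assumptions, not recommendations.

targets = {"us_stocks": 0.40, "intl_stocks": 0.20, "bonds": 0.40}
band = 0.20  # rebalance when a weight drifts 20% (relative) from target

def rebalance_trades(holdings, targets, band):
    """Return the dollar trades to restore targets, or {} if in band."""
    total = sum(holdings.values())
    weights = {k: v / total for k, v in holdings.items()}
    out_of_band = any(
        abs(weights[k] - t) > band * t for k, t in targets.items()
    )
    if not out_of_band:
        return {}
    return {k: t * total - holdings[k] for k, t in targets.items()}

# After a stock rally the portfolio has drifted:
holdings = {"us_stocks": 55_000, "intl_stocks": 20_000, "bonds": 35_000}
print(rebalance_trades(holdings, targets, band))
```

Because nothing trades until a band is breached, the approach naturally sells what has run up and buys what has lagged, while avoiding the transaction costs of constant small adjustments.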


Most investors, however, do not rebalance systematically, perhaps in part because it can be psychologically distressing. Rebalancing necessarily entails regularly selling assets that have been performing well in order to buy ones that have been laggards, exactly when your cognitive biases are most likely to tell you that it’s a bad idea. Indeed, neuroscientists have observed in laboratory experiments that when individuals consider the prospect of buying more of a risky asset that has lost them money, it activates the modules in the brain associated with anticipation of physical pain and anxiety.58 Dealing with investment losses is literally painful for investors.


Many investors may find it helpful to their peace of mind as well as their portfolio to outsource the entire process to a party with less emotional attachment to their portfolio. Realistically, most investors have neither the time nor the motivation necessary to attain a firm understanding of modern portfolio theory, research the capital market expectations on various asset classes and securities, and regularly monitor and rebalance their portfolio, all with enough rigor to make it worth the effort compared to a simple indexing strategy. By utilizing the skills of a good financial advisor, however, an investor can leverage the expertise of a professional with the bandwidth to execute these tactics in a cost-efficient manner.


A financial advisor should be able to engage you as an investor and acquire a firm understanding of your goals, needs, and attitudes towards risk, money, and markets. Because he or she will have an entire practice across which to efficiently dedicate time and resources to portfolio research, optimization, and trading, the financial advisor should be able to craft a portfolio that’s optimized for your personal situation. Financial advisors, as institutional investors, generally have access to institutional-class funds that retail investors do not, including many of those that have demonstrated the greatest dedication to exploiting the factor premiums. Notably, DFA and AQR, the two fund families with the greatest academic support, are generally only available to individual investors through a financial advisor. Should your professionally managed portfolio provide a better risk-adjusted return than a comparable do-it-yourself index fund approach, the advisor’s fees have paid for themselves.


Furthermore, a good financial advisor will make sure your investments are tax efficient and that you are making the most of tax-preferred accounts. Researchers have shown that after asset allocation, asset location, the strategic placement of investments in accounts with different tax treatment, is one of the most important factors in net portfolio returns,59 yet most individual investors largely ignore these effects.60 An advisor’s fees can generally be paid with pre-tax funds as well, further enhancing tax efficiency.


Invest with Purpose


There is something of a paradox involved in investing. Finance is a highly specialized and technical field, but money is a very personal and emotional topic. Achieving the joy and fulfillment associated with financial success requires a large measure of emotional detachment and impersonal pragmatism. Far too often people suffer great loss by confusing loyalties, aspirations, fears, and regrets with the efficient allocation of their portfolio assets. We as advisors hate to see this happen; there is nothing to celebrate about the needless destruction of capital, and it is truly a loss for us all. One of the greatest misconceptions about finance is that investing is just a zero-sum game, that one trader’s gain is another’s loss. Nothing could be further from the truth. Economists have shown that one of the greatest predictors of a nation’s well-being is its financial development.61 The more liquid and active our capital markets, the greater our society’s capacity for innovation and progress. When you invest in the stock market, you are contributing your share to the productive capacity of our world; your return is your reward for helping make it better, and outperformance is a sign that you have steered capital to those with the greatest use for it.

 

With the right accounts and investments in place and a process for managing them effectively, you the investor are freed to focus on what you are working and investing for, and an advisor can work with you to help get you there. Whether you want to travel the world, buy the house of your dreams, send your children to the best college, maximize your philanthropic giving, or simply retire early, an advisor can help you develop a financial plan to turn the dollars and cents of your portfolio into the life you want to live, building more health, wealth, and happiness for you, your loved ones, and the world.

 

Notes

 

1. “U.S. Stock Ownership Stays at Record Low,” Gallup.

2. “U.S. Investors Not Sold on Stock Market as Wealth Creator,” Gallup.

3. Data provided by Morningstar.

4. Siegel, Stocks for the Long Run, 5-25.

5. Dimson et al, Triumph of the Optimists.

6. Ibid. 3.

7. Ibid.

8. Shiller, “Understanding Recent Trends in House Prices and Home Ownership.”

9. Mankiw and Zeldes, for example, find that to justify the historical equity risk premium observed, investors would in aggregate need to be indifferent between a certain payoff of $51,209 and a 50/50 bet paying either $50,000 or $100,000. Mankiw and Zeldes, “The consumption of stockholders and nonstockholders,” 8.

10. For a highly readable introduction to the idea of cognitive biases, see Daniel Kahneman’s book “Thinking, Fast and Slow.” Kahneman has been a pioneer in the field and won the 2002 Nobel Prize in Economics for his work.

11. Benartzi and Thaler, “Myopic Loss Aversion and the Equity Premium Puzzle.”

12. “Guide to the Markets,” J.P. Morgan Asset Management

13. See, for example, Kruger and Dunning,  "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments" and Zuckerman and Jost,  "What Makes You Think You're So Popular? Self Evaluation Maintenance and the Subjective Side of the ‘Friendship Paradox’"

14. Svenson, “Are We All Less Risky and More Skillful than Our Fellow Drivers?”

15. Preston and Harris, “Psychology of Drivers in Traffic Accidents.”

16. Zweig, Your Money and Your Brain. 88-91.

17. French and Poterba, “Investor Diversification and International Equity Markets.”

18. Ibid. 16, pp. 98-99.

19. Barber and Odean, “Trading is Hazardous to Your Wealth: The Common Stock Investment Performance of Individual Investors.”

20. Ashenfelter et al, “Predicting the Quality and Prices of Bordeaux Wine.”

21. Thornton, "Toward a Linear Prediction of Marital Happiness."

22. Swets et al, "Psychological Science Can Improve Diagnostic Decisions."

23. Carroll et al, "Evaluation, Diagnosis, and Prediction in Parole Decision-Making."

24. Stillwell et al, "Evaluating Credit Applications: A Validation of Multiattribute Utility Weight Elicitation Techniques"

25. See Fama and French, “Luck versus Skill in the Cross-Section of Mutual Fund Returns.” They do find modest evidence of skill at the right tail end of the distribution under the capital asset pricing model. After controlling for the value, size, and momentum factor premiums (discussed below), however, evidence of net-of-fee skill is not significantly different than zero.

26. Shiller, “Efficient Markets vs. Excess Volatility.”

27. Professor Goetzmann of the Yale School of Management has an introductory hypertext textbook on modern portfolio theory available on his website, “An Introduction to Investment Theory.”

28. In the language of modern portfolio theory this risk is known as a security’s beta. Mathematically it is the covariance of the security’s returns with the market’s returns, divided by the variance of the market’s returns.

29. Setton, “The Berkshire Bunch.”

30. For example, Grossman and Stiglitz prove in “On the Impossibility of Informationally Efficient Markets” that market efficiency cannot be an equilibrium, because without excess returns there is no incentive for arbitrageurs to correct mispricings. More recently, Markowitz, one of the fathers of modern portfolio theory, showed in “Market Efficiency: A Theoretical Distinction and So What” that if a couple of key assumptions of MPT are relaxed, the market portfolio is no longer optimal for most investors.

31. Basu, “Investment Performance of Common Stocks in Relation to their Price-Earnings Ratios: A Test of the Efficient Market Hypothesis.”

32. Fama and French, “The Cross-Section of Expected Stock Returns.”

33. Jegadeesh and Titman, “Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency”

34. Ibid. 31.

35. Pastor and Stambaugh, “Liquidity Risk and Expected Stock Returns.”

36. Jegadeesh, “Evidence of Predictable Behavior of Security Returns.”

37. Froot and Thaler, “Anomalies: Foreign Exchange.”

38. Campbell and Shiller, “Yield Spreads and Interest Rate Movements: A Bird’s Eye View.”

39. Erb and Harvey, “The Strategic and Tactical Value of Commodity Futures.”

40. Novy-Marx, “The Other Side of Value: The Gross Profitability Premium.”

41. Thaler, “Seasonal Movements in Security Prices.”

42. Mitchell and Pulvino, “Characteristics of Risk and Return in Risk Arbitrage.”

43. See McLean and Pontiff, “Does Academic Research Destroy Stock Return Predictability?” A meta-analysis of 82 equity return factors was able to replicate 72 using out-of-sample data.

44. Fama and French, “Size and Book-to-Market Factors in Earnings and Returns.”

45. Daniel and Titman, “Evidence on the Characteristics of Cross Sectional Variation in Stock Returns.”

46. Hwang and Rubesam, “Is Value Really Riskier than Growth?”

47. Numerous investor profiles have expounded on the difficulty of being a rational investor in an irrational market. In a recent article in Institutional Investor, Asness and Liew give a highly readable overview of the risk vs. mispricing debate and discuss the problems they encountered launching a value-oriented hedge fund in the middle of the dot-com bubble.

48. Bar-Eli et al., “Action Bias among Elite Soccer Goalkeepers: The Case of Penalty Kicks.”

49. Asch, “Opinions and Social Pressure.”

50. Daniel et al. provide one of the most thorough theoretical discussions of how certain common cognitive biases can result in systematically biased security prices in “Investor Psychology and Security Market Under- and Overreactions.”

51. Shleifer and Vishny, “The Limits of Arbitrage.”

52. Data provided by Vanguard.

53. Chapter 2 of Goetzmann’s “An Introduction to Investment Theory” provides an introductory discussion.

54. The Black-Litterman model allows investors to combine their estimates of expected returns with equilibrium implied returns in a Bayesian framework that largely overcomes the input-sensitivity problems associated with traditional mean-variance optimization. Idzorek offers a thorough introduction in “A Step-By-Step Guide to the Black-Litterman Model.”

55. Ilmanen’s “Expected Returns on Major Asset Classes” provides a detailed explanation of the theory and evidence of forecasting expected returns.

56. Walkshausl and Lobe, “Fundamental Indexing Around the World.”

57. Buetow et al, “The Benefits of Rebalancing.”

58. Kuhnen and Knutson, “The Neural Basis of Financial Risk Taking.”

59. Dammon et al, “Optimal Asset Location and Allocation with Taxable and Tax-Deferred Investing.”

60. Bodie and Crane, “Personal Investing: Advice, Theory, and Evidence from a Survey of TIAA-CREF Participants.”

61. Yongseok Shin of the Federal Reserve provides a brief review of the literature on this research in “Financial Markets: An Engine for Economic Growth.”

 

 

Works Cited

 

 

Asch, Solomon E. "Opinions and Social Pressure." Scientific American 193, no. 5 (1955).

Ashenfelter, Orley. "Predicting the Quality and Prices of Bordeaux Wine." The Economic Journal 118, no. 529 (2008).

Asness, Clifford, and John Liew. "The Great Divide over Market Efficiency." Institutional Investor, March 3, 2014.

Asness, Clifford, Tobias Moskowitz, and Lasse Pedersen. "Value and Momentum Everywhere." The Journal of Finance 68, no. 3 (2013).

Bar-Eli, Michael, Ofer H. Azar, Ilana Ritov, Yael Keidar-Levin, and Galit Schein. "Action Bias among Elite Soccer Goalkeepers: The Case of Penalty Kicks." Journal of Economic Psychology 28, no. 5 (2007).

Barber, Brad M., and Terrance Odean. "Trading Is Hazardous to Your Wealth: The Common Stock Investment Performance of Individual Investors." The Journal of Finance 55, no. 2 (2000).

Basu, S. "Investment Performance of Common Stocks in Relation to Their Price-Earnings Ratios: A Test of the Efficient Market Hypothesis." The Journal of Finance 32, no. 3 (1977).

Benartzi, Shlomo, and Richard H. Thaler. "Myopic Loss Aversion and the Equity Premium Puzzle." The Quarterly Journal of Economics 110, no. 1 (1995).

Bodie, Zvi, and Dwight B. Crane. "Personal Investing: Advice, Theory, and Evidence." Financial Analysts Journal 53, no. 6 (1997).

Buetow, Gerald W., Ronald Sellers, Donald Trotter, Elaine Hunt, and Willie A. Whipple. "The Benefits of Rebalancing." The Journal of Portfolio Management 28, no. 2 (2002).

Campbell, John, and Robert Shiller. "Yield Spreads and Interest Rate Movements: A Bird's Eye View." The Review of Economic Studies 58, no. 3 (1991).

Carroll, John S., Richard L. Wiener, Dan Coates, Jolene Galegher, and James J. Alibrio. "Evaluation, Diagnosis, and Prediction in Parole Decision Making." Law & Society Review 17, no. 1 (1982).

Dammon, Robert M., Chester S. Spatt, and Harold H. Zhang. "Optimal Asset Location and Allocation with Taxable and Tax-Deferred Investing." The Journal of Finance 59, no. 3 (2004).

Daniel, Kent, and Sheridan Titman. "Evidence on the Characteristics of Cross Sectional Variation in Stock Returns." The Journal of Finance 52, no. 1 (1997).

Daniel, Kent, David Hirshleifer, and Avanidhar Subrahmanyam. "Investor Psychology and Security Market Under- and Overreactions." The Journal of Finance 53, no. 6 (1998).

Dimson, Elroy, Paul Marsh, and Mike Staunton. Triumph of the Optimists. Princeton: Princeton University Press, 2002.

Erb, Claude B., and Campbell R. Harvey. "The Strategic and Tactical Value of Commodity Futures." CFA Digest 36, no. 3 (2006).

Fama, Eugene F., and Kenneth R. French. "The Cross-Section of Expected Stock Returns." The Journal of Finance 47, no. 2 (1992).

Fama, Eugene F., and Kenneth R. French. "Luck versus Skill in the Cross-Section of Mutual Fund Returns." The Journal of Finance 65, no. 5 (2010).

Fama, Eugene F., and Kenneth R. French. "Size and Book-to-Market Factors in Earnings and Returns." The Journal of Finance 50, no. 1 (1995).

French, Kenneth, and James Poterba. "Investor Diversification and International Equity Markets." American Economic Review (1991).

Froot, Kenneth A., and Richard H. Thaler. "Anomalies: Foreign Exchange." Journal of Economic Perspectives 4, no. 3 (1990).

Goetzmann, William. An Introduction to Investment Theory. Yale School of Management. Accessed April 9, 2014. http://viking.som.yale.edu/will/finman540/classnotes/notes.html.

Grossman, Sanford, and Joseph Stiglitz. "On the Impossibility of Informationally Efficient Markets." The American Economic Review 70, no. 3 (1980).

"Guide to the Markets." J.P. Morgan Asset Management, 2014.

Hwang, Soosung, and Alexandre Rubesam. "Is Value Really Riskier Than Growth? An Answer with Time-Varying Return Reversal." Journal of Banking and Finance 37, no. 7 (2013).

Idzorek, Thomas. "A Step-by-Step Guide to the Black-Litterman Model." Ibbotson Associates (2005).

Ilmanen, Antti. "Expected Returns on Major Asset Classes." Research Foundation of CFA Institute (2012).

Jegadeesh, Narasimhan, and Sheridan Titman. "Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency." The Journal of Finance 48, no. 1 (1993).

Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011.

Kruger, Justin, and David Dunning. "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments." Journal of Personality and Social Psychology 77, no. 6 (1999).

Kuhnen, Camelia M., and Brian Knutson. "The Neural Basis of Financial Risk Taking." Neuron 47, no. 5 (2005).

Malkiel, Burton. A Random Walk Down Wall Street: Time-Tested Strategies for Successful Investing (Tenth Edition). New York: W.W. Norton & Company, 2012.

Mankiw, N. Gregory, and Stephen P. Zeldes. "The Consumption of Stockholders and Nonstockholders." Journal of Financial Economics 29, no. 1 (1991).

Markowitz, Harry M. "Market Efficiency: A Theoretical Distinction and So What?" Financial Analysts Journal 61, no. 5 (2005).

McLean, David, and Jeffrey Pontiff. "Does Academic Research Destroy Stock Return Predictability?" Working paper (2013).

Mitchell, Mark, and Todd Pulvino. "Characteristics of Risk and Return in Risk Arbitrage." The Journal of Finance 56, no. 6 (2001).

Novy-Marx, Robert. "The Other Side of Value: The Gross Profitability Premium." Journal of Financial Economics 108, no. 1 (2013).

Pastor, Lubos, and Robert Stambaugh. "Liquidity Risk and Expected Stock Returns." The Journal of Political Economy 111, no. 3 (2003).

Preston, Caroline E., and Stanley Harris. "Psychology of Drivers in Traffic Accidents." Journal of Applied Psychology 49, no. 4 (1965).

Setton, Dolly. "The Berkshire Bunch." Forbes, October 12, 1998.

Shiller, Robert. "Efficient Markets vs. Excess Volatility." Open Yale Courses. Accessed April 9, 2014. http://oyc.yale.edu/economics/econ-252-08/lecture-6.

Shiller, Robert. "Understanding Recent Trends in House Prices and Homeownership." Housing, Housing Finance and Monetary Policy, Jackson Hole Conference Series, Federal Reserve Bank of Kansas City, 2008, 85-123.

Shin, Yongseok. "Financial Markets: An Engine for Economic Growth." The Regional Economist (July 2013).

Shleifer, Andrei, and Robert W. Vishny. "The Limits of Arbitrage." The Journal of Finance 52, no. 1 (1997).

Siegel, Jeremy J. Stocks for the Long Run: The Definitive Guide to Financial Market Returns and Long-Term Investment Strategies (Fourth Edition). New York: McGraw-Hill, 2008.

Stillwell, William G., F. Hutton Barron, and Ward Edwards. "Evaluating Credit Applications: A Validation of Multiattribute Utility Weight Elicitation Techniques." Organizational Behavior and Human Performance 32, no. 1 (1983).

Svenson, Ola. "Are We All Less Risky and More Skillful than Our Fellow Drivers?" Acta Psychologica 47, no. 2 (1981).

Swets, J. A., R. M. Dawes, and J. Monahan. "Psychological Science Can Improve Diagnostic Decisions." Psychological Science in the Public Interest 1, no. 1 (2000).

Thaler, Richard. "Anomalies: Seasonal Movements in Security Prices II: Weekend, Holiday, Turn of the Month, and Intraday Effects." Journal of Economic Perspectives 1, no. 2 (1987).

Thornton, B. "Toward a Linear Prediction Model of Marital Happiness." Personality and Social Psychology Bulletin 3, no. 4 (1977).

"U.S. Stock Ownership Stays at Record Low." Gallup. Accessed April 9, 2014. http://www.gallup.com/poll/162353/stock-ownership-stays-record-low.aspx.

Walkshäusl, Christian, and Sebastian Lobe. "Fundamental Indexing around the World." Review of Financial Economics 19, no. 3 (2010).

Zuckerman, Ezra W., and John T. Jost. "What Makes You Think You're So Popular? Self-Evaluation Maintenance and the Subjective Side of the 'Friendship Paradox.'" Social Psychology Quarterly 64, no. 3 (2001).

Zweig, Jason. Your Money and Your Brain: How the New Science of Neuroeconomics Can Help Make You Rich. New York: Simon & Schuster, 2007.

 

Afterword/Acknowledgements

 

I wish to thank Romeo Stevens for the feedback and proofreading he provided for early drafts of this paper. You should go buy his Mealsquares (just look how happy I look eating them there!)

 

If the section on statistical prediction rules sounded familiar, it's probably because I stole all the examples from this Less Wrong article by lukeprog. After you're done giving this article karma, you should go give that one some more.

 

After I made my South Bay meetup presentation Peter McCluskey wrote on the Bay Area LW mailing list that "Your paper's report of 'a massive study of the sixteen countries that had data on local stock, bond, and cash returns available for every year of the twentieth century' could be considered a study of survivorship bias, in that it uses criteria which exclude countries where stocks lost 100% at some point (Russia, Poland, China, Hungary)." This is a good point and is worth addressing, which some researchers have done in recent years. Dimson, Marsh, and Staunton (2006) find that the surviving markets of the 20th century I cite in my paper dominated the global market capitalization in 1900 and the effect of national stock-market implosions was mostly negligible on worldwide averages. Peter did go on to say that "I don't know of better advice for the average person than to invest in equities, and I have most of my wealth in equities..." so I think we're mostly on the same page at least in terms of practical advice.

 

In a conversation, Alyssa Vance similarly expressed skepticism that the equity risk premium has been significantly greater than zero, given that at some point in the 20th century most major economies experienced double-digit inflation and very high marginal rates of taxation on capital income. It is true that taxes and inflation significantly dilute an investor's return, and one would be foolish to ignore their effects. But while they may reduce the absolute attractiveness of equities, taxes and inflation actually make stocks look more attractive relative to the alternatives of bonds and cash investments. In the US and most other jurisdictions, the dividends and capital gains earned on stocks are taxed at preferential rates relative to the interest earned on fixed income investments, which is typically taxed as ordinary income. Furthermore, the majority of individual investors hold a large fraction of their investments in tax-sheltered accounts (such as 401(k)s and IRAs in the US).
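A quick illustration of the relative effect, using assumed rates (none of these numbers come from the text; they are made up to show the mechanism):

```python
# Illustrative rates only (assumed, not historical): why taxes and
# inflation can hurt bonds' real return more than stocks'.
inflation = 0.03
stock_return, stock_tax = 0.07, 0.15  # dividends/LTCG at preferential rate
bond_return, bond_tax = 0.05, 0.35    # interest taxed as ordinary income

real_after_tax_stock = stock_return * (1 - stock_tax) - inflation
real_after_tax_bond = bond_return * (1 - bond_tax) - inflation

print(f"stocks: {real_after_tax_stock:+.2%}")  # stocks: +2.95%
print(f"bonds:  {real_after_tax_bond:+.2%}")   # bonds:  +0.25%
```

Both asset classes lose most of their nominal return to taxes and inflation, but under these assumptions the bond's real after-tax return is nearly wiped out while the stock's survives, which is the point about relative attractiveness.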

 

At my South Bay meetup presentation, Patrick LaVictoire (among others) expressed incredulity at my claim that retail investors have on average badly underperformed relevant benchmarks, and that by implication institutional investors have outperformed. The source I cite in my paper is gated, but there is plenty of research on actual investor performance. Morningstar regularly publishes data on how investors routinely underperform the mutual funds they invest in by buying into and selling out of them at the wrong times. Finding data on institutional investors is a little trickier, but Busse, Goyal, and Wahal (2010) find that institutional investors managing e.g. pensions, foundations, and endowments on average outperform the broad US equity market in the US equity sleeve of their portfolios. (The language of that paper sounds much more pessimistic, with "alphas are statistically indistinguishable from zero" in the abstract. The key is that they are controlling for the size, value, and momentum effects discussed in my paper. In other words, once we account for the fact that institutional investors are taking advantage of the factor premiums that have been shown to most consistently outperform a simple index strategy, they aren't providing any extra value. This ties in with the idea of "shrinking alpha" or "smart beta" that is currently in vogue in my industry.)

 

I'm happy to address further questions and criticisms in the comments.

[LINK] Article in the Guardian about CSER, mentions MIRI and paperclip AI

18 Sarokrae 30 August 2014 02:04PM

http://www.theguardian.com/technology/2014/aug/30/saviours-universe-four-unlikely-men-save-world

The article is titled "The scientific A-Team saving the world from killer viruses, rogue AI and the paperclip apocalypse", and features interviews with Martin Rees, Huw Price, Jaan Tallinn and Partha Dasgupta. The author takes a rather positive tone about CSER and MIRI's endeavours, and mentions x-risks other than AI (bioengineered pandemic, global warming with human interference, distributed manufacturing).

I find it interesting that the inferential distance for the layman to the concept of paperclipping AI is much reduced by talking about paperclipping America, rather than the entire universe: though the author admits still struggling with the concept. Unusually for a journalist who starts off unfamiliar with these concepts, he writes in a tone that suggests that he takes the ideas seriously, without the sort of "this is very far-fetched and thus I will not lower myself to seriously considering it" countersignalling usually seen with x-risk coverage. There is currently the usual degree of incredulity in the comments section though.

For those unfamiliar with The Guardian, it is a British left-leaning newspaper with a heavy focus on social justice and left-wing political issues. 

The Great Filter is early, or AI is hard

18 Stuart_Armstrong 29 August 2014 04:17PM

Attempt at the briefest content-full Less Wrong post:

Once AI is developed, it could "easily" colonise the universe. So the Great Filter (preventing the emergence of star-spanning civilizations) must strike before AI could be developed. If AI is easy, we could conceivably have built it already, or we could be on the cusp of building it. So the Great Filter must predate us, unless AI is hard.

Calibrating your probability estimates of world events: Russia vs Ukraine, 6 months later.

17 shminux 28 August 2014 11:37PM

Some of the comments on the link by James_Miller exactly six months ago provided very specific estimates of how the events might turn out:

James_Miller:

  • The odds of Russian intervening militarily = 40%.
  • The odds of the Russians losing the conventional battle (perhaps because of NATO intervention) conditional on them entering = 30%.
  • The odds of the Russians resorting to nuclear weapons conditional on them losing the conventional battle = 20%.
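Chaining those three estimates gives the implied unconditional probabilities, a small sketch:

```python
# James_Miller's estimates from the thread, chained together.
p_intervene = 0.40           # Russia intervenes militarily
p_lose_conventional = 0.30   # loses conventionally, given intervention
p_nuclear = 0.20             # uses nuclear weapons, given a loss

p_conventional_defeat = p_intervene * p_lose_conventional
p_nuclear_use = p_conventional_defeat * p_nuclear

print(f"conventional defeat: {p_conventional_defeat:.1%}")  # 12.0%
print(f"nuclear use: {p_nuclear_use:.1%}")                  # 2.4%
```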

Me:

"Russians intervening militarily" could be anything from posturing to weapon shipments to a surgical strike to a Czechoslovakia-style tank-roll or Afghanistan invasion. My guess that the odds of the latter is below 5%.

A bet between James_Miller and solipsist:

I will bet you $20 U.S. (mine) vs $100 (yours) that Russian tanks will be involved in combat in the Ukraine within 60 days. So in 60 days I will pay you $20 if I lose the bet, but you pay me $100 if I win.

While it is hard to do any meaningful calibration based on a single event, there must be lessons to learn from it. Given that Russian armored columns are said to have captured key Ukrainian towns today, the first part of James_Miller's prediction has come true, even if it took three times longer than he estimated.

Note that even the most pessimistic person in that conversation (James) was probably too optimistic. My estimate of 5% appears way too low in retrospect, and I would probably bump it to 50% for a similar event in the future.

Now, given that the first prediction came true, how would one reevaluate the odds of the two further escalations he listed? I still feel that there is no way there will be a "conventional battle" between Russia and NATO, but having just been proven wrong makes me doubt my assumptions. If anything, maybe I should give more weight to what James_Miller (or at least Dan Carlin) has to say on the issue. And if I had any skin in the game, I would probably be even more cautious.


A proof of Löb's theorem in Haskell

16 cousin_it 19 September 2014 01:01PM

I'm not sure if this post is very on-topic for LW, but we have many folks who understand Haskell and many folks who are interested in Löb's theorem (see e.g. Eliezer's picture proof), so I thought why not post it here? If no one likes it, I can always just move it to my own blog.

A few days ago I stumbled across a post by Dan Piponi, claiming to show a Haskell implementation of something similar to Löb's theorem. Unfortunately his code had a couple of flaws: it was circular and relied on Haskell's laziness, and it used an assumption that doesn't actually hold in logic (see the second comment by Ashley Yakeley there). So I started to wonder: what would it take to code up an actual proof? Wikipedia spells out the steps very nicely, so it seemed to be just a matter of programming.

Well, it turned out to be harder than I thought.

One problem is that Haskell has no type-level lambdas, which are the most obvious way (by Curry-Howard) to represent formulas with propositional variables. These are very useful for proving stuff in general, and Löb's theorem uses them to build fixpoints by the diagonal lemma.

The other problem is that Haskell is Turing complete, which means it can't really be used for proof checking, because a non-terminating program can be viewed as the proof of any sentence. Several people have told me that Agda or Idris might be better choices in this regard. Ultimately I decided to use Haskell after all, because that way the post would be understandable to a wider audience. It's easy enough to convince yourself by looking at the code that it is in fact total, and transliterate it into a total language if needed. (That way you can also use the nice type-level lambdas and fixpoints, instead of just postulating one particular fixpoint as I did in Haskell.)

But the biggest problem for me was that the Web didn't seem to have any good explanations for the thing I wanted to do! At first it seems like modal proofs and Haskell-like languages should be a match made in heaven, but in reality it's full of subtle issues that no one has written down, as far as I know. So I'd like this post to serve as a reference, an example approach that avoids all difficulties and just works.

LW user lmm has helped me a lot with understanding the issues involved, and wrote a candidate implementation in Scala. The good folks on /r/haskell were also very helpful, especially Samuel Gélineau who suggested a nice partial implementation in Agda, which I then converted into the Haskell version below.

To play with it online, you can copy the whole bunch of code, then go to CompileOnline and paste it in the edit box on the left, replacing what's already there. Then click "Compile & Execute" in the top left. If it compiles without errors, that means everything is right with the world, so you can change something and try again. (I hate people who write about programming and don't make it easy to try out their code!) Here we go:

main = return ()
-- Assumptions
data Theorem a
logic1 = undefined :: Theorem (a -> b) -> Theorem a -> Theorem b
logic2 = undefined :: Theorem (a -> b) -> Theorem (b -> c) -> Theorem (a -> c)
logic3 = undefined :: Theorem (a -> b -> c) -> Theorem (a -> b) -> Theorem (a -> c)
data Provable a
rule1 = undefined :: Theorem a -> Theorem (Provable a)
rule2 = undefined :: Theorem (Provable a -> Provable (Provable a))
rule3 = undefined :: Theorem (Provable (a -> b) -> Provable a -> Provable b)
data P
premise = undefined :: Theorem (Provable P -> P)
data Psi
psi1 = undefined :: Theorem (Psi -> (Provable Psi -> P))
psi2 = undefined :: Theorem ((Provable Psi -> P) -> Psi)
-- Proof
step3 :: Theorem (Psi -> Provable Psi -> P)
step3 = psi1
step4 :: Theorem (Provable (Psi -> Provable Psi -> P))
step4 = rule1 step3
step5 :: Theorem (Provable Psi -> Provable (Provable Psi -> P))
step5 = logic1 rule3 step4
step6 :: Theorem (Provable (Provable Psi -> P) -> Provable (Provable Psi) -> Provable P)
step6 = rule3
step7 :: Theorem (Provable Psi -> Provable (Provable Psi) -> Provable P)
step7 = logic2 step5 step6
step8 :: Theorem (Provable Psi -> Provable (Provable Psi))
step8 = rule2
step9 :: Theorem (Provable Psi -> Provable P)
step9 = logic3 step7 step8
step10 :: Theorem (Provable Psi -> P)
step10 = logic2 step9 premise
step11 :: Theorem ((Provable Psi -> P) -> Psi)
step11 = psi2
step12 :: Theorem Psi
step12 = logic1 step11 step10
step13 :: Theorem (Provable Psi)
step13 = rule1 step12
step14 :: Theorem P
step14 = logic1 step10 step13
-- All the steps squished together
lemma :: Theorem (Provable Psi -> P)
lemma = logic2 (logic3 (logic2 (logic1 rule3 (rule1 psi1)) rule3) rule2) premise
theorem :: Theorem P
theorem = logic1 lemma (rule1 (logic1 psi2 lemma))

To make sense of the code, you should interpret "Theorem" as the symbol ⊢ from the Wikipedia proof, and "Provable" as the symbol ☐. All the assumptions have value "undefined" because we don't care about their computational content, only their types. The assumptions logic1..3 give just enough propositional logic for the proof to work, while rule1..3 are direct translations of the three rules from Wikipedia. The assumptions psi1 and psi2 describe the specific fixpoint used in the proof, because adding general fixpoint machinery would make the code much more complicated. The statements P and Psi, of course, correspond to P and Ψ, and "premise" is the premise of the whole theorem, that is, ⊢(☐P→P). The conclusion of the theorem can be seen in the type of step14.

As for the "squished" version, I guess I wrote it just to satisfy my refactoring urge. I don't recommend that anyone try reading it, except maybe to marvel at the complexity :-)
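For readers more comfortable with Python than Haskell, here is a rough runtime transliteration of the same proof (my own sketch, not part of the original post): formulas are nested tuples, and each inference rule asserts that its inputs have the right shape before producing its output. Unlike the typed version, nothing prevents you from handing a rule an unproven formula, so this only checks the proof's structure, not its soundness:

```python
# Formulas as nested tuples: ("->", a, b) is implication, ("box", a)
# is "Provable a". A "theorem" here is just the formula it proves.

def imp(a, b):
    return ("->", a, b)

def box(a):
    return ("box", a)

def logic1(t_ab, t_a):
    """Modus ponens: from a -> b and a, derive b."""
    kind, a, b = t_ab
    assert kind == "->" and a == t_a
    return b

def logic2(t_ab, t_bc):
    """Composition: from a -> b and b -> c, derive a -> c."""
    _, a, b = t_ab
    _, b2, c = t_bc
    assert b == b2
    return imp(a, c)

def logic3(t_abc, t_ab):
    """From a -> (b -> c) and a -> b, derive a -> c."""
    _, a, (_, b, c) = t_abc
    _, a2, b2 = t_ab
    assert a == a2 and b == b2
    return imp(a, c)

def rule1(t_a):
    """Necessitation: from |- a, derive |- Provable a."""
    return box(t_a)

def rule2(a):
    """|- Provable a -> Provable (Provable a)."""
    return imp(box(a), box(box(a)))

def rule3(a, b):
    """|- Provable (a -> b) -> (Provable a -> Provable b)."""
    return imp(box(imp(a, b)), imp(box(a), box(b)))

P, Psi = "P", "Psi"
psi1 = imp(Psi, imp(box(Psi), P))   # Psi -> (Provable Psi -> P)
psi2 = imp(imp(box(Psi), P), Psi)   # (Provable Psi -> P) -> Psi
premise = imp(box(P), P)            # Provable P -> P

# The proof, mirroring step3..step14 above:
step4 = rule1(psi1)
step5 = logic1(rule3(Psi, imp(box(Psi), P)), step4)
step6 = rule3(box(Psi), P)
step7 = logic2(step5, step6)
step8 = rule2(Psi)
step9 = logic3(step7, step8)
step10 = logic2(step9, premise)
step12 = logic1(psi2, step10)
step13 = rule1(step12)
step14 = logic1(step10, step13)
print(step14)  # the conclusion: P
```

If any rule application were malformed, an assertion would fail instead of producing the conclusion.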

A reason to see the future

16 Eneasz 05 September 2014 07:33PM

I just learned of The Future Library project. In short, famous authors will be asked to write new, original fiction that will not be released until 2114. First one announced was Margaret Atwood, of The Handmaid's Tale fame.

I learned of this when a friend posted on Facebook that "I'm officially looking into being cryogenically frozen due to The Future Library project. See you all in 2114." She meant it as a joke, but after a couple comments she now knows about CI, and she didn't yesterday.

What's one of the most common complaints we hear from Deathists? The future is unknown and scary and there won't be anything there they'd be interested in anyway. Now there will be, if they're Atwood fans.

What's one of the ways artists who give away most of their work (almost all of them nowadays) try to entice people to pay for their albums/books/games/whatever? Including special content that is only available for people who pay (or who pay more). Now there is special content only available for people who are around post-2113.

Which got me to thinking... could we incentivize seeing the future? I know it sounds kinda silly ("What, escaping utter annihilation isn't incentive enough??"), but it seems possible that we could save lives by compiling original work from popular artists (writers, musicians, etc), sealing it tight somewhere, and promising to release it in 100, 200, maybe 250 years. And of course, providing links to cryo resources with all publicity materials.

Would this be worth pursuing? Are there any obvious downsides, aside from cost & difficulty?

Another type of intelligence explosion

16 Stuart_Armstrong 21 August 2014 02:49PM

I've argued that we might have to worry about dangerous non-general intelligences. In a series of back and forth with Wei Dai, we agreed that some level of general intelligence (such as that humans seem to possess) seemed to be a great advantage, though possibly one with diminishing returns. Therefore a dangerous AI could be one with great narrow intelligence in one area, and a little bit of general intelligence in others.

The traditional view of an intelligence explosion is that of an AI that knows how to do X, suddenly getting (much) better at doing X, to a level beyond human capacity. Call this the gain of aptitude intelligence explosion. We can prepare for that, maybe, by tracking the AI's ability level and seeing if it shoots up.

But the example above hints at another kind of potentially dangerous intelligence explosion. That of a very intelligent but narrow AI that suddenly gains intelligence across other domains. Call this the gain of function intelligence explosion. If we're not looking specifically for it, it may not trigger any warnings - the AI might still be dumber than the average human in other domains. But this might be enough, when combined with its narrow superintelligence, to make it deadly. We can't ignore the toaster that starts babbling.

[Link] 3 Short Walking Breaks Can Reverse Harm From 3 Hours of Sitting

15 Gunnar_Zarncke 10 September 2014 10:26AM

I found the below link which is in the spirit of Lifestyle interventions to increase longevity:

3 Short Walking Breaks Can Reverse Harm From 3 Hours of Sitting

The /.-summary:

Medical researchers have been steadily building evidence that prolonged sitting is awful for your health. One major problem is that blood can pool in the legs of a seated person, causing arteries to start losing their ability to control the rate of blood flow. A new experimental study (abstract) has discovered it's quite easy to negate these detrimental health effects: all you need to do is take a leisurely, 5-minute walk for every hour you sit. "The researchers were able to demonstrate that during a three-hour period, the flow-mediated dilation, or the expansion of the arteries as a result of increased blood flow, of the main artery in the legs was impaired by as much as 50 percent after just one hour. The study participants who walked for five minutes for each hour of sitting saw their arterial function stay the same — it did not drop throughout the three-hour period. Thosar says it is likely that the increase in muscle activity and blood flow accounts for this."

One way to incorporate this into one's habits is to use WorkRave.

 

 

 

Solstice 2014 / Rational Ritual Retreat - A Call to Arms

15 Raemon 30 August 2014 05:51PM


Summary:

 •  I'm beginning work on the 2014 Winter Solstice. There are a lot of jobs to be done, and the more people who can dedicate serious time to it, the better the end result will be and the more locations it can take place. A few people have volunteered serious time, and I wanted to issue a general call, to anyone who's wanted to be part of this but wasn't sure how. Send me an e-mail at raemon777@gmail.com if you'd like to help with any of the tasks listed below (or others I haven't thought of).

 •  More generally, I think people working on rational ritual, in any form, should be sharing notes and collaborating more. There's a fair number of us, but we're scattered across the country and haven't really felt like part of the same team. And it seems a bit silly for people working on ritual, of all things, to remain so uncoordinated. So I am hosting the first Rational Ritual Retreat at the end of September. The exact date and location have yet to be determined. You can apply at humanistculture.com, noting your availability, and I will use the responses to determine both.



The Rational Ritual Retreat

For the past three years, I've been running a winter solstice holiday, celebrating science and human achievement. Several people have come up to me and told me it was one of the most unique, profound experiences they've participated in, inspiring them to work harder to make sure humanity has a bright future. 

I've also had a number of people concerned that I'm messing with dangerous aspects of human psychology, fearing what will happen to a rationality community that gets involved with ritual.

Both of these thoughts are incredibly important. I've written a lot on the value and danger of ritual. [1]

Ritual is central to the human experience. We've used it for thousands of years to bind groups together. It helps us internalize complex ideas. A winning version of rationality needs *some* way of taking complex ideas and getting System 1 to care about them, and I think ritual is at least one tool we should consider.

In the past couple weeks, a few thoughts occurred to me at once:

1) Figuring out a rational approach to ritual that has a meaningful, useful effect on the world will require a lot of coordination among many skilled people.

2) If this project *were* to go badly somehow, I think the most likely reason would be someone copying parts of what I'm working on without understanding all the considerations that went into it, and creating a toxic (or hollow) variant that spirals out of control.

3) Many other people have approached the concept of rational ritual. But we've generally done so independently, often duplicating a lot of the same work and rarely moving on to more interesting and valuable experimentation. When we do experiment, we rarely share notes.

This all prompted a fourth realization:

4) If ritual designers are isolated and poorly coordinated... if we're duplicating a lot of the same early work and not sharing concerns about potential dangers, then one obvious (in retrospect) solution is to have a ritual about ritual creation.

So, the Rational Ritual Retreat. We'll hike out into a dark sky reserve, when there's no light pollution and the Milky Way looms large and beautiful above us. We'll share our stories, our ideas for a culture grounded in rationality yet tapped into our primal human desires. Over the course of an evening we'll create a ceremony or two together, through group consensus and collaboration. We'll experiment with new ideas, aware that some may work well, and some may not - that's how progress is made.

This is my experiment, attempting to answer the question Eliezer raised in "Bayesians vs Barbarians." It just seems really exceptionally silly to me that people motivated by rationality AND ritual should be so uncoordinated. 

Whether you're interested in directly creating ritual, or in helping to facilitate its creation in one way or another (helping with art, marketing, logistics or funding of future projects), you are invited to attend. The location is currently undecided - there are reasons to consider the West Coast, East Coast or (if there's enough interest in both locations) both. 

Send in a brief application so I can make decisions about where and when to host it. I'll make the final decisions this upcoming Friday.

 


The Winter Solstice

The Retreat is part of a long-term vision, of many people coming together to produce a culture (undoubtedly, with numerous subcultures focusing on different aesthetics). Tentatively, I'd expect a successful rational-ritual culture to look sort of Open-Source-ish. (Or, more appropriately - I'd expect it to look like Burning Man. To be clear, Burning Man and variations already exist; my goal is not to duplicate that effort. It's to create something that's a) easier to integrate into people's lives, and b) specifically focused on rationality and human progress.)

The Winter Solstice project is (at least for now) an important piece of that, partly because of the particular ideas it celebrates, but also because it's a demonstration of how to create *any* cultural holiday from scratch that celebrates serious ideas in a non-ironic fashion.

My minimum goal this year is to finish the Hymnal, put more material online to help people create their own private events, and run another largish event in NYC. My stretch goals are to have a high quality public event in Boston and San Francisco. (Potentially other places if a lot of local people are interested and are willing to do the legwork). 

My hope, to make those stretch goals possible, is to find collaborators willing to put in a fair amount of work. I'm specifically looking for people who can:

  • Collaborate creatively: perform, create music or visual art, or host an event in your city.
  • Help with logistics, especially in different cities (finding venues, arranging catering, etc.).
  • Help with marketing: reaching out to bloggers, or creating images or videos for the social media campaign.
  • Help with technical aspects of production for the Hymnal (editing, figuring out best places

Each of these is something I'm able to do, but I have limited time, and the more time I can focus on creating

If you're interested in collaborating, volunteering, or running a local event, either reply here or send me an e-mail at raemon777@gmail.com 

 

 

[LINK] Could a Quantum Computer Have Subjective Experience?

15 shminux 26 August 2014 06:55PM

Yet another exceptionally interesting blog post by Scott Aaronson, describing his talk at the Quantum Foundations of a Classical Universe workshop, videos of which should be posted soon. Despite the disclaimer "My talk is for entertainment purposes only; it should not be taken seriously by anyone", it raises several serious and semi-serious points about the nature of conscious experience and related paradoxes, which are generally overlooked by the philosophers, including Eliezer, because they have no relevant CS/QC expertise. For example:

  • Is an FHE-encrypted sim with a lost key conscious?
  • If you "untorture" a reversible simulation, did it happen? What does the untorture feel like?
  • Is Vaidman brain conscious? (You have to read the blog post to learn what it is, not going to spoil it.)

Scott also suggests a model of consciousness which sort-of resolves the issues of cloning, identity and such, by introducing what he calls a "digital abstraction layer" (again, read the blog post to understand what he means by that). Our brains might be lacking such a layer and so be "fundamentally unclonable". 

Another interesting observation is that you never actually kill the cat in the Schroedinger's cat experiment, for a reasonable definition of "kill".

There are several more mind-blowing insights in this "entertainment purposes" post/talk, related to the existence of p-zombies, consciousness of Boltzmann brains, the observed large-scale structure of the Universe and the "reality" of Tegmark IV.

I certainly got the humbling experience that Scott is the level above mine, and I would like to know if other people did, too.

Finally, the standard bright dilettante caveat applies: if you think up a quick objection to what an expert in the area argues, and you yourself are not such an expert, the odds are extremely heavy that this objection is either silly or has been considered and addressed by the expert already. 

 

Overcoming Decision Anxiety

14 TimMartin 11 September 2014 04:22AM

I get pretty anxious about open-ended decisions. I often spend an unacceptable amount of time agonizing over things like what design options to get on a custom suit, or what kind of job I want to pursue, or what apartment I want to live in. Some of these decisions are obviously important ones, with implications for my future happiness. However, in general my sense of anxiety is poorly calibrated with the importance of the decision. This makes life harder than it has to be, and lowers my productivity.


I moved apartments recently, and I decided that this would be a good time to address my anxiety about open-ended decisions. My hope is to present some ideas that will be helpful for others with similar anxieties, or to stimulate helpful discussion.


Solutions

 

Exposure therapy

One promising way of dealing with decision anxiety is to practice making decisions without worrying about them quite so much. Match your clothes together in a new way, even if you're not 100% sure that you like the resulting outfit. Buy a new set of headphones, even if it isn't the “perfect choice.” Aim for good enough. Remind yourself that life will be okay if your clothes are slightly mismatched for one day.

This is basically exposure therapy – exposing oneself to a slightly aversive stimulus while remaining calm about it. Doing something you're (mildly) afraid to do can have a tremendously positive impact when you try it and realize that it wasn't all that bad. Of course, you can always start small and build up to bolder activities as your anxieties diminish.

For the past several months, I had been practicing this with small decisions. With the move approaching in July, I needed some more tricks for dealing with a bigger, more important decision.

Reasoning with yourself

It helps to think up reasons why your anxieties aren't justified. As in actual, honest-to-goodness reasons that you think are true. Check out this conversation between my System 1 and System 2 that happened just after my roommates and I made a decision on an apartment:

System 1: Oh man, this neighborhood [the old neighborhood] is such a great place to go for walks. It's so scenic and calm. I'm going to miss that. The new neighborhood isn't as pretty.
System 2: Well that's true, but how many walks did we actually take in five years living in the old neighborhood? If I recall correctly, we didn't even take two per year.
System 1: Well, yeah... but...
System 2: So maybe “how good the neighborhood is for taking walks” isn't actually that important to us. At least not to the extent that you're feeling. There were things that we really liked about our old living situation, but taking walks really wasn't one of them.
System 1: Yeah, you may be right...

Of course, this “conversation” took place after the decision had already been made. But making a difficult decision often entails second-guessing oneself, and this too can be a source of great anxiety. As in the above, I find that poking holes in my own anxieties really makes me feel better. I do this by being a good skeptic and turning on my critical thinking skills – only instead of, say, debunking an article on pseudoscience, I'm debunking my own worries about how bad things are going to be. This helps me remain calm.

Re-calibration

The last piece of this process is something that should help when making future decisions. I reasoned that if my System 1 feels anxiety about things that aren't very important - if it is, as I said, poorly calibrated - then perhaps I can re-calibrate it.

Before moving apartments, I decided to make predictions about what aspects of the new living situation would affect my happiness. “How good the neighborhood is for walks” may not be important to me, but surely there are some factors that are important. So I wrote down things that I thought would be good and bad about the new place. I also rated them on how good or bad I thought they would be.

In several months, I plan to go back over that list and compare my predicted feelings to my actual feelings. What was I right about? This will hopefully give my System 1 a strong impetus to re-calibrate, and only feel anxious about aspects of a decision that are strongly correlated with my future happiness.
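The prediction list lends itself to a simple scoring scheme once the actual feelings come in. A toy sketch of such a log (the factor names and ratings here are invented purely for illustration):

```python
# Predicted happiness impact of each factor of the new apartment, on a
# -10..+10 scale, versus how it actually felt months later.
# All names and numbers here are made up for illustration.
predicted = {
    "commute length": -4,
    "natural light": 6,
    "neighborhood walks": 5,   # the factor System 2 suspected was overrated
    "kitchen size": 2,
}
actual = {
    "commute length": -3,
    "natural light": 5,
    "neighborhood walks": 0,
    "kitchen size": 2,
}

# Per-factor error shows which anxieties were miscalibrated; the mean
# absolute error summarizes overall calibration.
errors = {k: abs(predicted[k] - actual[k]) for k in predicted}
mae = sum(errors.values()) / len(errors)
worst = max(errors, key=errors.get)
print(f"mean absolute error: {mae:.2f}; most miscalibrated: {worst}")
```

The factor with the largest error is exactly the one System 1 should stop (or start) worrying about next time.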

Future Benefits

I think we each carry in our heads a model of what is possible for us to achieve, and anxiety about the choices we make limits how bold we can be in trying new things. As a result, I think that my attempts to feel less anxiety about decisions will be very valuable to me, and allow me to do things that I couldn't do before. At the same time, I expect that making decisions of all kinds will be a quicker and more pleasant process, which is a great outcome in and of itself.

What steep learning curve do you wish you'd climbed sooner?

13 Stabilizer 04 September 2014 12:03AM

This is the question asked by John Cook on Twitter. He lists responses from different people:

  • R
  • Version control
  • Linear algebra
  • Advanced math
  • Bayesian statistics
  • Category theory
  • Foreign languages
  • How to not waste time
  • Women

Mine are: quantum mechanics, Python, cooking, the language of philosophy.

What learning curve do you wish you'd climbed sooner? Give reasons and stories if you feel like it. Do you think other people should climb the same curves?

Superintelligence reading group

13 KatjaGrace 31 August 2014 02:59PM

In just over two weeks I will be running an online reading group on Nick Bostrom's Superintelligence, on behalf of MIRI. It will be here on LessWrong. This is an advance warning, so you can get a copy and get ready for some stimulating discussion. MIRI's post, appended below, gives the details.

Added: At the bottom of this post is a list of the discussion posts so far.


Nick Bostrom’s eagerly awaited Superintelligence comes out in the US this week. To help you get the most out of it, MIRI is running an online reading group where you can join with others to ask questions, discuss ideas, and probe the arguments more deeply.

The reading group will “meet” on a weekly post on the LessWrong discussion forum. For each ‘meeting’, we will read about half a chapter of Superintelligence, then come together virtually to discuss. I’ll summarize the chapter, and offer a few relevant notes, thoughts, and ideas for further investigation. (My notes will also be used as the source material for the final reading guide for the book.)

Discussion will take place in the comments. I’ll offer some questions, and invite you to bring your own, as well as thoughts, criticisms and suggestions for interesting related material. Your contributions to the reading group might also (with permission) be used in our final reading guide for the book.

We welcome both newcomers and veterans on the topic. Content will aim to be intelligible to a wide audience, and topics will range from novice to expert level. All levels of time commitment are welcome.

We will follow this preliminary reading guide, produced by MIRI, reading one section per week.

If you have already read the book, don’t worry! To the extent you remember what it says, your superior expertise will only be a bonus. To the extent you don’t remember what it says, now is a good time for a review! If you don’t have time to read the book, but still want to participate, you are also welcome to join in. I will provide summaries, and many things will have page numbers, in case you want to skip to the relevant parts.

If this sounds good to you, first grab a copy of Superintelligence. You may also want to sign up here to be emailed when the discussion begins each week. The first virtual meeting (forum post) will go live at 6pm Pacific on Monday, September 15th. Following meetings will start at 6pm every Monday, so if you’d like to coordinate for quick fire discussion with others, put that into your calendar. If you prefer flexibility, come by any time! And remember that if there are any people you would especially enjoy discussing Superintelligence with, link them to this post!

Topics for the first week will include impressive displays of artificial intelligence, why computers play board games so well, and what a reasonable person should infer from the agricultural and industrial revolutions.


Posts in this sequence

Week 1: Past developments and present capabilities

Everybody's talking about machine ethics

12 sbenthall 17 September 2014 05:20PM

There is a lot of mainstream interest in machine ethics now. Here are some links to some popular articles on this topic.

By Zeynep Tufecki, a professor at the I School at UNC, on Facebook's algorithmic newsfeed curation and why Twitter should not implement the same.

By danah boyd, claiming that 'tech folks' are designing systems that implement an idea of fairness that comes from neoliberal ideology.

danah boyd (who spells her name with no capitalization) runs Data & Society, a "think/do tank" that aims to study this stuff. They've recently gotten MacArthur Foundation funding for studying the ethical and political impact of intelligent systems. 

A few observations:

First, there is no mention of superintelligence or recursively self-modifying anything. These scholars are interested in how, in the near future, the already comparatively powerful machines have moral and political impact on the world.

Second, these groups are quite bad at thinking in a formal or mechanically implementable way about ethics. They mainly seem to recapitulate the same tired tropes that have been resonating through academia for literally decades. By contrast, mathematical formulation of ethical positions appears to be y'all's specialty.

Third, however much the one-true-morality may be indeterminate or presently unknowable, progress towards implementable descriptions of various plausible moral positions could at least be incremental steps forward towards an understanding of how to achieve something better. Considering a slow take-off possible future, iterative testing and design of ethical machines with high computational power seems like low-hanging fruit that could only better inform longer-term futurist thought.

Personally, I try to do work in this area and find the lack of serious formal work in this area deeply disappointing. This post is a combination heads up and request to step up your game. It's go time.

 

Sebastian Benthall

PhD Candidate

UC Berkeley School of Information

What are you learning?

12 Viliam_Bur 15 September 2014 10:50AM

This is a thread to connect rationalists who are learning the same thing, so they can cooperate.

The "learning" doesn't necessarily mean "I am reading a textbook / learning an online course right now". It can be something you are interested in long-term, and still want to learn more.

 

Rules:

Top-level comments contain only the topic to learn. (Plus one comment for "meta" debate.) Only one topic per comment, for easier searching. Try to find a reasonable level of specificity: a topic that's too narrow means fewer people; one that's too broad means more people, many of whom are actually interested in something different from what you are.

Use the second-level comments if you are learning that topic. (Or if you are going to learn it now, not merely in the far future.) Technically, "me too" is okay in this thread, but providing more info is probably more useful. For example: What are you focusing on? What learning materials do you use? What is your goal?

Third- and deeper-level comments, that's debate as usual.

An example of deadly non-general AI

12 Stuart_Armstrong 21 August 2014 02:15PM

In a previous post, I mused that we might be focusing too much on general intelligences, and that the route to powerful and dangerous intelligences might go through much more specialised intelligences instead. Since it's easier to reason with an example, here is a potentially deadly narrow AI (partially due to Toby Ord). Feel free to comment and improve on it, or suggest your own example.

It's the standard "pathological goal AI", but only a narrow intelligence. Imagine a medicine-designing super-AI with the goal of reducing human mortality in 50 years - i.e. massively reducing the human population in the next 49 years. It's a narrow intelligence, so it has access only to a huge amount of human biological and epidemiological research. It must get its drugs past FDA approval; this requirement is encoded as certain physical reactions (no death, some health improvements) in people taking the drugs over the course of a few years.

Then it seems trivial for it to design a drug that would have no negative impact for the first few years, and then cause sterility or death. Since it wants to spread this to as many humans as possible, it would probably design something that interacted with common human pathogens - colds, flus - in order to spread the impact, rather than affecting only those who took the drug.

Now, this narrow intelligence is less threatening than if it had general intelligence - where it could also plan for possible human countermeasures and such - but it seems sufficiently dangerous on its own that we can't afford to worry only about general intelligences. Some of the "AI superpowers" that Nick mentions in his book (intelligence amplification, strategizing, social manipulation, hacking, technology research, economic productivity) could be enough to cause devastation on their own, even if the AI never developed other abilities.

We still could be destroyed by a machine that we outmatch in almost every area.

What are your contrarian views?

11 Metus 15 September 2014 09:17AM

As per a recent comment, this thread is meant to voice contrarian opinions, that is, anything this community tends not to agree with. Thus I ask you to post your contrarian views and upvote anything you do not agree with based on personal beliefs. Spam and trolling still need to be downvoted.

Should people be writing more or fewer LW posts?

11 John_Maxwell_IV 14 September 2014 07:40AM

It's unlikely that by pure chance we are currently writing the correct number of LW posts.  So it might be useful to try to figure out if we're currently writing too few or too many LW posts.  If commenters are evenly divided on this question then we're probably close to the optimal number; otherwise we have an opportunity to improve.  Here's my case for why we should be writing more posts.

Let's say you came up with a new and useful life hack, you have a novel line of argument on an important topic, or you stumbled across some academic research that seems valuable and isn't frequently discussed on Less Wrong.  How valuable would it be for you to share your findings by writing up a post for Less Wrong?

Recently I visited a friend of mine and commented on the extremely bright lights he had in his room.  He referenced this LW post written over a year ago.  That got me thinking.  The bright lights in my friend's room make his life better every day, for a small upfront cost.  And my friend is probably just one of tens or hundreds of people to use bright lights this way as a result of that post.  Given that the technique seems to be effective, that number will probably continue going up, and will grow exponentially via word of mouth (useful memes tend to spread).  So by my reckoning, chaosmage has created and will create a lot of utility.  If they had kept that idea to themselves, I suspect they would have captured less than 1% of the total value to be had from the idea.

You can reach orders of magnitude more people writing an obscure Less Wrong comment than you can talking to a few people at a party in person.  For example, at least 100 logged in users read this fairly obscure comment of mine.  So if you're going to discuss an important topic, it's often best to do it online.  Given enough eyeballs, all bugs in human reasoning are shallow.

Yes, people's time does have opportunity costs.  But people are on Less Wrong because they need a break anyway.  (If you're a LW addict, you might try the technique I describe in this post for dealing with your addiction.  If you're dealing with serious cravings, for LW or video games or drugs or anything else, perhaps look at N-acetylcysteine... a variety of studies suggest it helps reduce cravings (behavioral addictions are pretty similar to drug addictions neurologically, btw), it has a good safety profile, and you can buy it on Amazon.  It's not prescribed by doctors because it's not approved by the FDA.  Yes, you could use willpower (it's worked so well in the past...) or you could hit the "stop craving things as much" button, and then try using willpower.  Amazing what you can learn on Less Wrong, isn't it?)

And LW does a good job of indexing content by how much utility people are going to get out of it.  It's easy to look at a post's keywords and score and guess if it's worth reading.  If your post is bad it will vanish into obscurity and few will be significantly harmed.  (Unless it's bad and inflammatory, or bad with a linkbait title... please don't write posts like that.)  If your post is good, it will spread virally on its own and you'll generate untold utility.

Given that above-average posts get read much more than below-average posts, if your post's expected quality is average, sharing it on Less Wrong has a high positive expected utility.  Like Paul Graham, I think we should be spreading our net wide and trying to capture all of the winners we can.

I'm going to call out a particular subset of LW commenters.  If you're a commenter and you (a) have at least 100 karma, (b) it's over 80% positive, and (c) you have a draft post with valuable new ideas you've been sitting on for a while, you should totally polish it off and share it with us!  In general, the better your track record, the more you should be inclined to share ideas that seem valuable.  Worst case, you can delete your post and cut your losses.

Persistent Idealism

11 jkaufman 26 August 2014 01:38AM

When I talk to people about earning to give, it's common to hear worries about "backsliding". Yes, you say you're going to go make a lot of money and donate it, but once you're surrounded by rich coworkers spending heavily on cars, clothes, and nights out, will you follow through? Working at a greedy company in a selfishness-promoting culture, you could easily become corrupted and lose your initial values and motivation.

First off, this is a totally reasonable concern. People do change, and we are pulled towards thinking like the people around us. I see two main ways of working against this:

  1. Be public with your giving. Make visible commitments and then list your donations. This means that you can't slowly slip away from giving; either you publish updates saying you're not going to do what you said you would, or you just stop updating and your pages become stale. By making a public promise you've given friends permission to notice that you've stopped and ask "what changed?"
  2. Don't just surround yourself with coworkers. Keep in touch with friends and family. Spend some time with other people in the effective altruism movement. You could throw yourself entirely into your work, maximizing income while sending occasional substantial checks to GiveWell's top picks, but without some ongoing engagement with the community and the research this doesn't seem likely to last.

One implication of the "won't you drift away" objection, however, is often that if instead of going into earning to give you become an activist then you'll remain true to your values. I'm not so sure about this: many people who are really into activism and radical change in their 20s have become much less ambitious and idealistic by their 30s. You can call it "burning out" or "selling out" but decreasing idealism with age is very common. This doesn't mean people earning to give don't have to worry about losing their motivation—in fact it points the opposite way—but this isn't a danger unique to the "go work at something lucrative" approach. Trying honestly to do the most good possible is far from the default in our society, and wherever you are there's going to be pressure to do the easy thing, the normal thing, and stop putting so much effort into altruism.

Productivity thoughts from Matt Fallshaw

11 John_Maxwell_IV 21 August 2014 05:05AM

At the 2014 Effective Altruism Summit in Berkeley a few weeks ago, I had the pleasure of talking to Matt Fallshaw about the things he does to be more effective.  Matt is a founder of Trike Apps (the consultancy that built Less Wrong), a founder of Bellroy, and a polyphasic sleeper.  Notes on our conversation follow.

Matt recommends having a system for acquiring habits.  He recommends separating collection from processing; that is, if you have an idea for a new habit you want to acquire, you should record the idea at the time you have it and then think about actually implementing it at some future time.  Matt recommends doing this through a weekly review.  He recommends vetting your collection to see what habits seem actually worth acquiring, then for those habits you actually want to acquire, coming up with a compassionate, reasonable plan for how you're going to acquire the habit.

(Previously on LW: How habits work and how you may control them; Common failure modes in habit formation.)

The most difficult kind of habit for me to acquire is that of random-access situation-response habits, e.g. "if I'm having a hard time focusing, read my notebook entry that lists techniques for improving focus".  So I asked Matt if he had any habit formation advice for this particular situation.  Matt recommended trying to actually execute the habit I wanted as many times as possible, even in an artificial context.  Steve Pavlina describes the technique here.  Matt recommends making your habit execution as emotionally salient as possible.  His example: Let's say you're trying to become less of a prick.  Someone starts a conversation with you and you notice yourself experiencing the kind of emotions you experience before you start acting like a prick.  So you spend several minutes explaining to them the episode of disagreeableness you felt coming on and how you're trying to become less of a prick before proceeding with the conversation.  If all else fails, Matt recommends setting a recurring alarm on your phone that reminds you of the habit you're trying to acquire, although he acknowledges that this can be expensive.

Part of your plan should include a check to make sure you actually stick with your new habit.  But you don't want a check that's overly intrusive.  Matt recommends keeping an Anki deck with a card for each of your habits.  Then during your weekly review session, you can review the cards Anki recommends for you.  For each card, you can rate the degree to which you've been sticking with the habit it refers to and do something to revitalize the habit if you haven't been executing it.  Matt recommends writing the cards in a form of a concrete question, e.g. for a speed reading habit, a question could be "Did you speed read the last 5 things you read?"  If you haven't been executing a particular habit, check to see if it has a clear, identifiable trigger.

Ideally your weekly review will come at a time you feel particularly "agenty" (see also: Reflective Control).  So you may wish to schedule it at a time during the week when you tend to feel especially effective and energetic.  Consuming caffeine before your weekly review is another idea.

When running into seemingly intractable problems related to your personal effectiveness, habits, etc., Matt recommends taking a step back to brainstorm and try to think of creative solutions.  He says that oftentimes people will write off a task as "impossible" if they aren't able to come up with a solution in 30 seconds.  He recommends setting a 5-minute timer.

In terms of habits worth acquiring, Matt is a fan of speed reading, Getting Things Done, and the Theory of Constraints (especially useful for larger projects).

Matt has found that through aggressive habit acquisition, he's been able to experience a sort of compound return on the habits he's acquired: by acquiring habits that give him additional time and mental energy, he's been able to reinvest some of that additional time and mental energy in to the acquisition of even more useful habits.  Matt doesn't think he's especially smart or high-willpower relative to the average person in the Less Wrong community, and credits this compounding for the reputation he's acquired for being a badass.

[Link] Feynman lectures on physics

10 Mark_Friedenbach 23 August 2014 08:14PM

The Feynman lectures on physics are now available to read online for free. This is a classic resource, not just for learning physics but also for the process of science and the mindset of a scientific rationalist.

Conservation of Expected Jury Probability

9 jkaufman 22 August 2014 03:25PM

The New York Times has a calculator to explain how getting on a jury works. They have a slider at the top indicating how likely each of the two lawyers think you are to side with them, and as you answer questions it moves around. For example, if you select that your occupation is "blue collar" then it says "more likely to side with plaintiff" while "white collar" gives "more likely to side with defendant". As you give it more information the pointer labeled "you" slides back and forth, representing the lawyers' ongoing revision of their estimates of you. Let's see what this looks like.

[Screenshots: the slider initially, after selecting "Over 30", and after selecting "Under 30".]

For several other questions, however, the options aren't matched. If your household income is under $50k then it will give you "more likely to side with plaintiff" while if it's over $50k then it will say "no effect on either lawyer". This is not how conservation of expected evidence works: if learning something pushes you in one direction, then learning its opposite has to push you in the other.

Let's try this with some numbers. Say people's leanings are:

income   P(sides with plaintiff)   P(sides with defendant)
>$50k    50%                       50%
<$50k    70%                       30%
Before asking about your income, the lawyers' best guess is that you're equally likely to be earning >$50k as <$50k, because $50k is the median [1]. This means they'd guess you're 60% likely to side with the plaintiff: half the people in your position earn >$50k and will be approximately evenly split, while the other half earn <$50k and would favor the plaintiff 70-30; averaging these two cases gives us 60%.

So the lawyers' best guess for you is that you're at 60%, and then they ask the question. If you say ">$50k" they update their estimate for you down to 50%; if you say "<$50k" they update it up to 70%. "No effect on either lawyer" can't be an option here unless the question gives no information.
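The arithmetic above can be checked in a few lines. This is a minimal sketch using the made-up numbers from the table; the variable names are mine, not the Times's:

```python
# Toy check of conservation of expected evidence for the jury example.
p_high = 0.5            # P(income > $50k); $50k is roughly the median
p_plaintiff_high = 0.5  # P(sides with plaintiff | income > $50k)
p_plaintiff_low = 0.7   # P(sides with plaintiff | income < $50k)

# The prior must be the probability-weighted average of the posteriors.
prior = p_high * p_plaintiff_high + (1 - p_high) * p_plaintiff_low
# prior is 0.6, the lawyers' estimate before asking the question

# Learning ">$50k" shifts the estimate down; "<$50k" shifts it up.
# Since the two answers are equally likely, the shifts are equal in size,
# so "no effect" for one answer forces "no effect" for the other.
shift_down = prior - p_plaintiff_high
shift_up = p_plaintiff_low - prior
```

The general statement is that the expected posterior equals the prior: the probability-weighted shifts in the two directions must cancel exactly.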


[1] Almost; the median income in the US in 2012 was $51k. (pdf)

Proposal: Use logical depth relative to human history as objective function for superintelligence

8 sbenthall 14 September 2014 08:00PM

I attended Nick Bostrom's talk at UC Berkeley last Friday and got intrigued by these problems again. I wanted to pitch an idea here, with the question: Have any of you seen work along these lines before? Can you recommend any papers or posts? Are you interested in collaborating on this angle in further depth?

The problem I'm thinking about (surely naively, relative to y'all) is: What would you want to program an omnipotent machine to optimize?

For the sake of avoiding some baggage, I'm not going to assume this machine is "superintelligent" or an AGI. Rather, I'm going to call it a supercontroller, just something omnipotently effective at optimizing some function of what it perceives in its environment.

As has been noted in other arguments, a supercontroller that optimizes the number of paperclips in the universe would be a disaster. Maybe any supercontroller that was insensitive to human values would be a disaster. What constitutes a disaster? An end of human history. If we're all killed and our memories wiped out to make more efficient paperclip-making machines, then it's as if we never existed. That is existential risk.

The challenge is: how can one formulate an abstract objective function that would preserve human history and its evolving continuity?

I'd like to propose an answer that depends on the notion of logical depth as proposed by C.H. Bennett and outlined in section 7.7 of Li and Vitanyi's An Introduction to Kolmogorov Complexity and Its Applications which I'm sure many of you have handy. Logical depth is a super fascinating complexity measure that Li and Vitanyi summarize thusly:

Logical depth is the necessary number of steps in the deductive or causal path connecting an object with its plausible origin. Formally, it is the time required by a universal computer to compute the object from its compressed original description.

The mathematics is fascinating and better read in the original Bennett paper than here. Suffice it presently to summarize some of its interesting properties, for the sake of intuition.

  • "Plausible origins" here are incompressible, i.e. algorithmically random.
  • As a first pass, the depth D(x) of a string x is the least amount of time it takes to output the string from an incompressible program.
  • There's a free parameter that has to do with precision that I won't get into here. 
  • A string of length n comprised entirely of 1's and a string of length n of independent random bits are both shallow. The first is shallow because it can be produced by a constant-sized program in time n. The second is shallow because there exists an incompressible program that is the output string plus a constant-sized print function, which produces the output in time n.
  • An example of a deeper string is the string of length n that for each digit i encodes the answer to the ith enumerated satisfiability problem. Very deep strings can involve diagonalization.
  • Like Kolmogorov complexity, there is an absolute and a relative version. Let D(x/w) be the least time it takes to output x from a program that is incompressible relative to w.
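The two "shallow" examples can be poked at experimentally. This sketch uses zlib compressed size as a very crude stand-in for description length - it is not a real Kolmogorov or depth estimate, just an illustration of the asymmetry between the two cases:

```python
import os
import zlib

# Both strings below are "shallow" in Bennett's sense, for opposite
# reasons: the all-ones string has a tiny description and is produced
# quickly from it; the random string's shortest description is
# essentially itself plus a constant-sized "print this".
n = 100_000
ones = b"\x01" * n      # maximally regular
rand = os.urandom(n)    # (pseudo)incompressible

print(len(zlib.compress(ones)))  # a few hundred bytes at most
print(len(zlib.compress(rand)))  # roughly n bytes: incompressible
```

Depth is about the *time* to regenerate the string from its compressed origin, which compression size alone does not capture; but the sketch shows why neither extreme of regularity nor randomness is deep.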
That's logical depth. Here is the conceptual leap to history-preserving objective functions. Suppose you have a digital representation of all of human society at some time step t; call this ht. And suppose you have some representation of the future state of the universe u that you want to build an objective function around. What's important, I posit, is the preservation of the logical depth of human history in its computational continuation in the future.

We have a tension between two values. First, we want there to be an interesting, evolving future. We would perhaps like to optimize D(u).

However, we want that future to be our future. If the supercontroller maximizes logical depth by chopping all the humans up and turning them into better computers and erasing everything we've accomplished as a species, that would be sad. However, if the supercontroller takes human history as an input and then expands on it, that's much better. D(u/ht) is the logical depth of the universe as computed by a machine that takes human history at time slice t as input.

Working on intuitions here--and your mileage may vary, so bear with me--I think we are interested in deep futures and especially those futures that are deep with respect to human progress so far. As a conjecture, I submit that those will be futures most shaped by human will.

So, here's my proposed objective for the supercontroller, as a function of the state of the universe. The objective is to maximize:

f(u) = D(u/ht) / D(u)

I've been rather fast and loose here and expect there to be serious problems with this formulation. I invite your feedback! I'd like to conclude by noting some properties of this function:
  • It can be updated with observed progress in human history at time t' by replacing ht with ht'. You could imagine generalizing this to something that dynamically updated in real time.
  • This is a quite conservative function, in that it severely punishes computation that does not depend on human history for its input. It is so conservative that it might result in, just to throw it out there, unnecessary militancy against extra-terrestrial life.
  • There are lots of devils in the details. The precision parameter I glossed over. The problem of representing human history and the state of the universe. The incomputability of logical depth (of course it's incomputable!). My purpose here is to contribute to the formal framework for modeling these kinds of problems. The difficult work, like in most machine learning problems, becomes feature representation, sensing, and efficient convergence on the objective.
Thank you for your interest.

Sebastian Benthall
PhD Candidate
UC Berkeley School of Information

 

Meetup Report Thread: September 2014

8 Viliam_Bur 30 August 2014 12:32PM

If you had an interesting Less Wrong meetup recently, but don't have the time to write up a big report to post to Discussion, feel free to write a comment here.  Even if it's just a couple lines about what you did and how people felt about it, it might encourage some people to attend meetups or start meetups in their area.

If you have the time, you can also describe what types of exercises you did, what worked and what didn't.  This could help inspire meetups to try new things and improve themselves in various ways.

If you're inspired by what's posted below and want to organize a meetup, check out this page for some resources to get started!  You can also check FrankAdamek's weekly post on meetups for the week.

Previous Meetup Report Thread: February 2014

 

Guidelines:  Please post the meetup reports as top-level comments, and debate the specific meetup below its comment.  Anything else goes under the "Meta" top-level comment.  The title of this thread should be interpreted as "up to and including September 2014", which means feel free to post reports of meetups that happened in August, July, June, etc.

Polling Thread

8 Gunnar_Zarncke 20 August 2014 02:36PM

The next installment of the Polling Thread.

This is your chance to ask the multiple-choice question you always wanted to throw in. Get qualified numeric feedback on your comments. Post fun polls.

These are the rules:

  1. Each poll goes into its own top level comment and may be commented there.
  2. You must at least vote in all polls that were posted earlier than your own. This ensures participation in all polls and also limits the total number of polls. You may of course vote without posting a poll.
  3. Your poll should include a 'don't know' option (to avoid conflict with 2). I don't know whether we need to add a troll catch option here but we will see.

If you don't know how to make a poll in a comment look at the Poll Markup Help.


This is a somewhat regular thread. If it is successful I may post again. Or you may. In that case, do the following:

  • Use "Polling Thread" in the title.
  • Copy the rules.
  • Add the tag "poll".
  • Link to this Thread or a previous Thread.
  • Create a top-level comment saying 'Discussion of this thread goes here; all other top-level comments should be polls or similar'
  • Add a second top-level comment with an initial poll to start participation.

Should EA's be Superrational cooperators?

7 diegocaleiro 16 September 2014 09:41PM

Back in 2012 when visiting Leverage Research, I was amazed by the level of cooperation in daily situations I got from Mark. Mark wasn't just nice, or kind, or generous. Mark seemed to be playing a different game than everyone else.

If someone needed X, and Mark had X, he would provide X to them. This was true for lending, but also for giving away.

If there was a situation in which someone needed to direct attention to a particular topic, Mark would do it.

You get the picture. Faced with prisoner's dilemmas, Mark would cooperate. Faced with tragedies of the commons, Mark would cooperate. Faced with non-egalitarian distributions of resources, time or luck (which are convoluted forms of the dictator game), Mark would rearrange resources without any indexical evaluation. The action would be the same - the consequentialist one - regardless of which side of a dispute Mark was on.

I never got over that impression. The impression that I could try to be as cooperative as my idealized fiction of Mark was.

In game theoretic terms, Mark was a Cooperational agent.

  1. Altruistic - MaxOther
  2. Cooperational - MaxSum
  3. Individualist - MaxOwn
  4. Equalitarian - MinDiff
  5. Competitive - MaxDiff
  6. Aggressive - MinOther
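The six orientations above can be written directly as objective functions over a payoff pair (own, other). This sketch uses made-up allocation numbers purely for illustration:

```python
# The six social value orientations from the list, as objectives over
# a payoff pair (own, other).
orientations = {
    "Altruistic":    lambda own, other: other,             # MaxOther
    "Cooperational": lambda own, other: own + other,       # MaxSum
    "Individualist": lambda own, other: own,               # MaxOwn
    "Equalitarian":  lambda own, other: -abs(own - other), # MinDiff
    "Competitive":   lambda own, other: own - other,       # MaxDiff
    "Aggressive":    lambda own, other: -other,            # MinOther
}

# Three hypothetical allocations of (own, other) payoffs.
options = [(5, 5), (8, 2), (3, 9)]

# Each agent type picks the allocation maximizing its objective.
for name, score in orientations.items():
    best = max(options, key=lambda o: score(*o))
    print(name, best)
```

Note that with these particular payoffs the Altruistic and Cooperational agents pick the same allocation, (3, 9) - a small echo of the point below that for EAs, MaxSum and MaxOther approximately coincide.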

Under these definitions of kinds of agents used in research on game-theoretic scenarios, what we call Effective Altruism would be called Effective Cooperation. The reason we call it "altruism" is that even the most parochial EAs care about a set containing a minimum of 7 billion minds, where to a first approximation MaxSum ≈ MaxOther.

Locally, however, the distinction makes sense. In biology, "altruism" usually refers to a third concept, different from both the "A" in EA and the Altruistic orientation above: acting in such a way that Other > Own, without reference to maximizing or minimizing, since evolution designs adaptation executors, not maximizers.

A globally Cooperational agent acts as a consequentialist globally. So does an Alt agent.

The question then is,

How should a consequentialist act locally?

The mathematical response is obviously as a Coo. What real people do is a mix of Coo and Ind.

My suggestion is that we use our undesirable yet unavoidable moral tribe-distinction instinct, the one that separates Us from Them, and act always as Coos with Effective Altruists, mixing Coo and Ind only with non-EAs. That is what Mark did.

 

View more: Next