This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome to the Superintelligence reading group. This week we discuss the first section in the reading guide, Past developments and present capabilities. This section considers the behavior of the economy over very long time scales, and the recent history of artificial intelligence (henceforth, 'AI'). These two areas are excellent background if you want to think about large economic transitions caused by AI.
This post summarizes the section, and offers a few relevant notes, thoughts, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post. Feel free to jump straight to the discussion. Where applicable, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: Foreword, and Growth modes through State of the art from Chapter 1 (p1-18)
- Economic growth has become radically faster over the course of human history. (p1-2)
- This growth has been uneven rather than continuous, perhaps corresponding to the farming and industrial revolutions. (p1-2)
- Thus history suggests large changes in the growth rate of the economy are plausible. (p2)
- This makes it more plausible that human-level AI will arrive and produce unprecedented levels of economic productivity.
- Predictions of much faster growth rates might also suggest the arrival of machine intelligence, because it is hard to imagine humans - slow as they are - sustaining such a rapidly growing economy. (p2-3)
- Thus economic history suggests that rapid growth caused by AI is more plausible than you might otherwise think.
The history of AI:
- Human-level AI has been predicted since the 1940s. (p3-4)
- Early predictions were often optimistic about when human-level AI would come, but rarely considered whether it would pose a risk. (p4-5)
- AI research has been through several cycles of relative popularity and unpopularity. (p5-11)
- By around the 1990s, 'Good Old-Fashioned Artificial Intelligence' (GOFAI) techniques based on symbol manipulation gave way to new methods such as artificial neural networks and genetic algorithms. These are widely considered more promising, in part because they are less brittle and can learn from experience more usefully. Researchers have also lately developed a better understanding of the underlying mathematical relationships between various modern approaches. (p5-11)
- AI is very good at playing board games. (p12-13)
- AI is used in many applications today (e.g. hearing aids, route-finders, recommender systems, medical decision support systems, machine translation, face recognition, scheduling, the financial market). (p14-16)
- In general, tasks we thought were intellectually demanding (e.g. board games) have turned out to be easy to do with AI, while tasks which seem easy to us (e.g. identifying objects) have turned out to be hard. (p14)
- An 'optimality notion' is the combination of a rule for learning, and a rule for making decisions. Bostrom describes one of these: a kind of ideal Bayesian agent. This is impossible to actually make, but provides a useful measure for judging imperfect agents against. (p10-11)
Notes on a few things
- What is 'superintelligence'? (p22 spoiler)
In case you are too curious about what the topic of this book is to wait until week 3, a 'superintelligence' will soon be described as 'any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest'. Vagueness in this definition will be cleared up later.
- What is 'AI'?
In particular, how does 'AI' differ from other computer software? The line is blurry, but basically AI research seeks to replicate the useful 'cognitive' functions of human brains ('cognitive' is perhaps unclear, but for instance it doesn't have to be squishy or prevent your head from imploding). Sometimes AI research tries to copy the methods used by human brains. Other times it tries to carry out the same broad functions as a human brain, perhaps better than a human brain. Russell and Norvig (p2) divide prevailing definitions of AI into four categories: 'thinking humanly', 'thinking rationally', 'acting humanly' and 'acting rationally'. For our purposes however, the distinction is probably not too important.
- What is 'human-level' AI?
We are going to talk about 'human-level' AI a lot, so it would be good to be clear on what that is. Unfortunately the term is used in various ways, and often ambiguously. So we probably can't be that clear on it, but let us at least be clear on how the term is unclear.
One big ambiguity is whether you are talking about a machine that can carry out tasks as well as a human at any price, or a machine that can carry out tasks as well as a human at the price of a human. These are quite different, especially in their immediate social implications.
Other ambiguities arise in how 'levels' are measured. If AI systems were to replace almost all humans in the economy, but only because they are so much cheaper - though they often do a lower quality job - are they human level? What exactly does the AI need to be human-level at? Anything you can be paid for? Anything a human is good for? Just mental tasks? Even mental tasks like daydreaming? Which or how many humans does the AI need to be the same level as? Note that in a sense most humans have been replaced in their jobs before (almost everyone used to work in farming), so if you use that metric for human-level AI, it was reached long ago, and perhaps farm machinery is human-level AI. This is probably not what we want to point at.
Another thing to be aware of is the diversity of mental skills. If by 'human-level' we mean a machine that is at least as good as a human at each of these skills, then in practice the first 'human-level' machine will be much better than a human on many of those skills. It may not seem 'human-level' so much as 'very super-human'.
We could instead think of human-level as closer to 'competitive with a human' - where the machine has some super-human talents and lacks some skills humans have. This is not usually used, I think because it is hard to define in a meaningful way. There are already machines for which a company is willing to pay more than a human: in this sense a microscope might be 'super-human'. There is no reason for a machine which is equal in value to a human to have the traits we are interested in talking about here, such as agency, superior cognitive abilities or the tendency to drive humans out of work and shape the future. Thus we talk about AI which is at least as good as a human, but you should beware that the predictions made about such an entity may apply before the entity is technically 'human-level'.
Example of how the first 'human-level' AI may surpass humans in many ways.
Because of these ambiguities, AI researchers are sometimes hesitant to use the term. e.g. in these interviews.
- Growth modes (p1)
Robin Hanson wrote the seminal paper on this issue. Here's a figure from it, showing the step changes in growth rates. Note that both axes are logarithmic. Note also that the changes between modes don't happen overnight. According to Robin's model, we are still transitioning into the industrial era (p10 in his paper).
- What causes these transitions between growth modes? (p1-2)
One might be happier making predictions about future growth mode changes if one had a unifying explanation for the previous changes. As far as I know, we have no good idea of what was so special about those two periods. There are many suggested causes of the industrial revolution, but nothing uncontroversially stands out as 'twice in history' level of special. You might think the small number of datapoints would make this puzzle too hard. Remember however that there are quite a lot of negative datapoints: you need an explanation for something that happened at those two times, yet failed to happen at every other time in history.
- Growth of growth
It is also interesting to compare world economic growth to the total size of the world economy. For the last few thousand years, the economy seems to have grown faster more or less in proportion to its size (see figure below). Extrapolating such a trend would lead to an infinite economy in finite time. In fact, for the thousand years until 1950 such extrapolation would place an infinite economy in the late 20th century! The period since 1950 has apparently been anomalous.
(Figure from here)
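That 'infinite economy in finite time' arithmetic is easy to check numerically. In the sketch below (parameter values are illustrative, not fitted to the historical data), the percentage growth rate is proportional to the economy's size, i.e. dE/dt = k·E², and the simulated economy diverges at roughly t = 1/(k·E0) rather than growing forever.

```python
def blowup_time(E0=1.0, k=0.001, dt=0.01, horizon=2000.0):
    # Euler-simulate dE/dt = k * E * E: the percentage growth rate
    # k * E rises in proportion to the economy's size E.
    E, t = E0, 0.0
    while t < horizon and E < 1e12:
        E += k * E * E * dt
        t += dt
    return t

print(blowup_time())  # diverges near t = 1000, far short of the horizon
```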
- Early AI programs mentioned in the book (p5-6)
You can see them in action: SHRDLU, Shakey, General Problem Solver (not quite in action), ELIZA.
- Later AI programs mentioned in the book (p6)
Algorithmically generated Beethoven, algorithmic generation of patentable inventions, artificial comedy (requires download).
- Modern AI algorithms mentioned (p7-8, 14-15)
Here is a neural network doing image recognition. Here is artificial evolution of jumping and of toy cars. Here is a face detection demo that can tell you your attractiveness (apparently not reliably), happiness, age, gender, and which celebrity it mistakes you for.
- What is maximum likelihood estimation? (p9)
Bostrom points out that many types of artificial neural network can be viewed as classifiers that perform 'maximum likelihood estimation'. If you haven't come across this term before, the idea is to find the situation that would make your observations most probable. For instance, suppose a person writes to you and tells you that you have won a car. The situation that would have made this scenario most probable is the one where you have won a car, since in that case you are almost guaranteed to be told about it. Note that this doesn't imply that you should think you won a car, if someone tells you that. Being the target of a spam email might only give you a low probability of being told that you have won a car (a spam email may instead advise you of products, or tell you that you have won a boat), but spam emails are so much more common than actually winning cars that most of the time if you get such an email, you will not have won a car. If you would like a better intuition for maximum likelihood estimation, Wolfram Alpha has several demonstrations (requires free download).
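If you prefer code to the demonstrations, here is a minimal, hypothetical sketch of maximum likelihood estimation (the coin example and grid search are my own illustration, not from the book): given some observed coin flips, pick the bias under which those observations are most probable.

```python
import math

def log_likelihood(p, heads, tails):
    # Log-probability of seeing `heads` heads and `tails` tails
    # if the coin's true chance of heads is p.
    return heads * math.log(p) + tails * math.log(1 - p)

def mle_estimate(heads, tails, grid_size=1000):
    # Grid search over candidate biases: keep the one under which
    # the observed flips are most probable.
    candidates = [i / grid_size for i in range(1, grid_size)]
    return max(candidates, key=lambda p: log_likelihood(p, heads, tails))

print(mle_estimate(7, 3))  # 0.7: the bias that makes 7 heads in 10 flips likeliest
```

Note, as in the car example, that the maximum likelihood answer is not automatically the answer you should believe: it ignores how common the candidate situations are to begin with (their prior probabilities).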
- What are hill climbing algorithms like? (p9)
The second large class of algorithms Bostrom mentions are hill climbing algorithms. The idea here is fairly straightforward, but if you would like a better basic intuition for what hill climbing looks like, Wolfram Alpha has a demonstration to play with (requires free download).
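As a complement to the demonstration, here is a toy, hypothetical hill climber in one dimension (my own illustration, not from the book): repeatedly step toward whichever neighboring point improves the objective, and stop at a local maximum.

```python
def hill_climb(f, x=0.0, step=0.01, max_iters=10000):
    # Move to the better of the two neighboring points while that
    # improves f; stop when x itself is best (a local maximum).
    for _ in range(max_iters):
        best = max([x - step, x, x + step], key=f)
        if best == x:
            return x
        x = best
    return x

# A single smooth hill peaking at x = 2:
print(round(hill_climb(lambda x: -(x - 2) ** 2 + 3), 2))  # 2.0
```

The well-known weakness of the approach is that the climber halts on the first local peak it finds, which need not be the global one.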
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions:
- How have investments into AI changed over time? Here's a start, estimating the size of the field.
- What does progress in AI look like in more detail? What can we infer from it? I wrote about algorithmic improvement curves before. If you are interested in plausible next steps here, ask me.
- What do economic models tell us about the consequences of human-level AI? Here is some such thinking; Eliezer Yudkowsky has written at length about his request for more.
How to proceed
This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about what AI researchers think about human-level AI: when it will arrive, what it will be like, and what the consequences will be. To prepare, read Opinions about the future of machine intelligence from Chapter 1 and also When Will AI Be Created? by Luke Muehlhauser. The discussion will go live at 6pm Pacific time next Monday 22 September. Sign up to be notified here.
Unfamiliar or unpopular ideas will tend to reach you via proponents who:
- ...hold extreme interpretations of these ideas.
- ...have unpleasant social characteristics.
- ...generally come across as cranks.
The basic idea: It's unpleasant to promote ideas that result in social sanction, and frustrating when your ideas are met with indifference. Both situations are more likely when talking to an ideological out-group. Given a range of positions on an in-group belief, who will decide to promote the belief to outsiders? On average, it will be those who believe the benefits of the idea are large relative to in-group opinion (extremists), those who view the social costs as small (disagreeable people), and those who are dispositionally drawn to promoting weird ideas (cranks).
I don't want to push this pattern too far. This isn't a refutation of any particular idea. There are reasonable people in the world, and some of them even express their opinions in public, (in spite of being reasonable). And sometimes the truth will be unavoidably unfamiliar and unpopular, etc. But there are also...
Some benefits that stem from recognizing these selection effects:
- It's easier to be charitable to controversial ideas when you recognize that you're interacting with people who are terribly suited to persuade you. I'm not sure "steelmanning" (trying to present the best argument for an opponent's position) is the best idea. Based on the extremity effect, another technique is to construct a much diluted version of the belief, and then try to steelman the diluted belief.
- If your group holds fringe or unpopular ideas, you can avoid these patterns when you want to influence outsiders.
- If you want to learn about an afflicted issue, you might ignore the public representatives and speak to non-evangelists instead (you'll probably have to start the conversation).
- You can resist certain polarizing situations, in which the most visible camps hold extreme and opposing views. This situation worsens when those with non-extreme views judge the risk of participation as excessive, and leave the debate to the extremists (who are willing to take substantial risks for their beliefs). This leads to the perception that the current camps represent the only valid positions, which creates a polarizing loop. Because this is a sort of coordination failure among non-extremists, knowing to covertly look for other non-vocal moderates is a first step toward a solution. (Note: Sometimes there really aren't any moderates.)
- Related to the previous point: You can avoid exaggerating the ideological unity of a group based on the group's leadership, or believing that the entire group has some obnoxious trait present in the leadership. (Note: In things like elections and war, the views of the leadership are what you care about. But you still don't want to be confused about other group members.)
I think the first benefit listed is the most useful.
To sum up: An unpopular idea will tend to get poor representation for social reasons, which makes it seem like a worse idea than it really is, even granting that many unpopular ideas are unpopular for good reason. So when you encounter an idea that seems unpopular, you're probably hearing about it from a sub-optimal source, and you should try to be charitable towards the idea before dismissing it.
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
This is a thread to connect rationalists who are learning the same thing, so they can cooperate.
The "learning" doesn't necessarily mean "I am reading a textbook / learning an online course right now". It can be something you are interested in long-term, and still want to learn more.
Top-level comments contain only the topic to learn. (Plus one comment for "meta" debate.) Only one topic per comment, for easier search. Try to find a reasonable level of specificity: too narrow a topic means fewer people; too wide a topic means more people who are actually interested in something different than you are.
Use the second-level comments if you are learning that topic. (Or if you are going to learn it now, not merely in the far future.) Technically, "me too" is okay in this thread, but providing more info is probably more useful. For example: What are you focusing on? What learning materials do you use? What is your goal?
Third- and deeper-level comments, that's debate as usual.
As per a recent comment, this thread is meant to voice contrarian opinions, that is, anything this community tends not to agree with. Thus I ask you to post your contrarian views and upvote anything you do not agree with based on personal beliefs. Spam and trolling still need to be downvoted.
Hello Less Wrong, I don't post here much but I've been involved in the Bay Area Less Wrong community for several years, where many of you know me from. The following is a white paper I wrote earlier this year for my firm, RHS Financial, a San Francisco based private wealth management practice. A few months ago I presented it at a South Bay Less Wrong meetup. Since then many of you have encouraged me to post it here for the rest of the community to see. The original can be found here, please refer to the disclosures, especially if you are the SEC. I have added an afterword here beneath the citations to address some criticisms I have encountered since writing it. As a company white paper intended for a general audience, please forgive me if the following is a little too self-promoting or spends too much time on grounds already well-tread here, but I think many of you will find it of value. Hope you enjoy!
Executive Summary: Capital markets have created enormous amounts of wealth for the world and reward disciplined, long-term investors for their contribution to the productive capacity of the economy. Most individuals would do well to invest most of their wealth in the capital market assets, particularly equities. Most investors, however, consistently make poor investment decisions as a result of a poor theoretical understanding of financial markets as well as cognitive and emotional biases, leading to inferior investment returns and inefficient allocation of capital. Using an empirically rigorous approach, a rational investor may reasonably expect to exploit inefficiencies in the market and earn excess returns in so doing.
Most people understand that they need to save money for their future, and surveys consistently find a large majority of Americans expressing a desire to save and invest more than they currently are. Yet the savings rate and percentage of people who report owning stocks has trended down in recent years,1 despite the increasing ease with which individuals can participate in financial markets, thanks to the spread of discount brokers and employer 401(k) plans. Part of the reason for this is likely the unrealistically pessimistic expectations of would-be investors. According to a recent poll barely one third of Americans consider equities to be a good way to build wealth over time.2 The verdict of history, however, is against the skeptics.
The Greatest Deal of all Time
Equity ownership is probably the easiest, most powerful means of accumulating wealth over time, and people regularly forego millions of dollars over the course of their lifetimes letting their wealth sit in cash. Since its inception in 1926, the annualized total return on the S&P 500 has been 9.8% as of the end of 2012.3 $1 invested back then would be worth $3,533 by the end of the period. More saliently, a 25 year old investor investing $5,000 per year at that rate would have about $2.1 million upon retirement at 65.
The strong performance of stock markets is robust to different times and places. Though the most accurate data on the US stock market goes back to 1926, financial historians have gathered information going back to 1802 and find the average annualized real return in earlier periods is remarkably close to the more recent official records. Looking at rolling 30 year returns between 1802 and 2006, the lowest and highest annualized real returns have been 2.6% and 10.6%, respectively.4 The United States is not unique in its experience, either. In a massive study of the sixteen countries that had data on local stock, bond, and cash returns available for every year of the twentieth century, the stock market in every one had significant, positive real returns that exceeded those of cash and fixed income alternatives.5 The historical returns of US stocks only slightly exceed those of the global average.
The opportunity cost of not holding stocks is enormous. Historically the interest earned on cash equivalent investments like savings accounts has barely kept up with inflation - over the same since-1926 period inflation has averaged 3.0% while the return on 30-day treasury bills (a good proxy for bank savings rates) has been 3.5%.6 That 3.5% rate would only earn an investor $422k over the same $5k/year scenario above. The situation today is even worse. Most banks are currently paying about 0.05% on savings.
Similarly, investment grade bonds, such as those issued by the US Treasury and highly rated corporations, though often an important component of a diversified portfolio, have offered returns only modestly better than cash over the long run. The average return on 10-year treasury bonds has been 5.1%,7 earning an investor $619k over the same 40 year scenario. The yield on the 10-year treasury is currently about 3%.
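The dollar figures in the last few paragraphs follow from ordinary future-value arithmetic. A quick sketch to reproduce them (illustrative code; the rates are the historical averages quoted above):

```python
def future_value_of_annuity(payment, rate, years):
    # Value after `years` of investing `payment` at the end of each
    # year, compounding annually at `rate`.
    return payment * ((1 + rate) ** years - 1) / rate

# $5,000/year from age 25 to 65 at each asset's historical return:
for label, rate in [("stocks (9.8%)", 0.098),
                    ("T-bills (3.5%)", 0.035),
                    ("10-yr bonds (5.1%)", 0.051)]:
    print(f"{label}: ${future_value_of_annuity(5000, rate, 40):,.0f}")
```

This reproduces the roughly $2.1 million, $422k, and $619k outcomes in the text.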
Homeownership has long been a part of the American dream, and many have been taught that building equity in your home is the safest and most prudent way to save for the future. The fact of the matter, however, is that residential housing is more of a consumption good than an investment. Over the last century the value of houses has barely kept up with inflation,8 and as the recent mortgage crisis demonstrated, home prices can crash just like any other market.
In virtually every time and place we look, equities are the best performing asset available, a fact which is consistent with the economic theory that risky assets must offer a premium to their investors to compensate them for the additional uncertainty they bear. What has puzzled economists for decades is why the so-called equity risk premium is so large and why so many individuals invest so little in stocks.9
Your Own Worst Enemy
Recent insights from multidisciplinary approaches in cognitive science have shed light on the issue, demonstrating that instead of rationally optimizing between various trade-offs, human beings regularly rely on heuristics - mental shortcuts that require little cognitive effort - when making decisions.10 These heuristics lead to taking biased approaches to problems that deviate from optimal decision making in systematic and predictable ways. Such biases affect financial decisions in a large number of ways, one of the most profound and pervasive being the tendency of myopic loss aversion.
Myopic loss aversion refers to the combined result of two observed regularities in the way people think: that losses feel bad to a greater extent than equivalent gains feel good, and that people rely too heavily (anchor) on recent and readily available information.11 Taken together, it is easy to see how these mental errors could bias an individual against holding stocks. Though the historical and expected return on equities greatly exceeds those of bonds and cash, over short time horizons they can suffer significant losses. And while the loss of one’s home equity is generally a nebulous abstraction that may not manifest itself consciously for years, stock market losses are highly visible, drawing attention to themselves in brokerage statements and newspaper headlines. Not surprisingly, then, an all too common pattern among investors is to start investing at a time when the headlines are replete with stories of the riches being made in markets, only to suffer a pullback and quickly sell out at ten, twenty, thirty plus percent losses and sit on cash for years until the next bull market is again near its peak in a vicious circle of capital destruction. Indeed, in the 20 year period ending 2012, the S&P 500 returned 8.2% and investment grade bonds returned 6.3% annualized. The inflation rate was 2.5%, and the average retail investor earned an annualized rate of 2.3%.12
Even when investors can overcome their myopic loss aversion and stay in the stock market for the long haul, investment success is far from assured. The methods by which investors choose which stocks or stock managers to buy, hold, and sell are also subject to a host of biases which consistently lead to suboptimal investing and performance. Chief among these is overconfidence, the belief that one’s judgements and skills are reliably superior.
Overconfidence is endemic to the human experience. The vast majority of people think of themselves as more intelligent, attractive, and competent than most of their peers,13 even in the face of proof to the contrary. 93% of people consider themselves to be above-average drivers,14 for example, and that percentage decreases only slightly if you ask people to evaluate their driving skill after being admitted to a hospital following a traffic accident.15 Similarly, most investors are confident they can consistently beat the market. One survey found 74% of mutual fund investors believed the funds they held would “consistently beat the S&P 500 every year” in spite of the statistical reality that more than half of US stock funds underperform in a given year and virtually none will outperform it each and every year. Many investors will even report having beaten the index despite having verifiably underperformed it by several percentage points.16
Overconfidence leads investors to take outsized bets on what they know and are familiar with. Investors around the world commonly hold 80% or more of their portfolios in investments from their own country,17 and one third of 401(k) assets are invested in participants’ own employer’s stock.18 Such concentrated portfolios are demonstrably riskier than a broadly diversified portfolio, yet investors regularly evaluate their investments as less risky than the general market, even if their securities had recently lost significantly more than the overall market.
If an investor believes himself to possess superior talent in selecting investments, he is likely to trade more as a result in an attempt to capitalize on each new opportunity that presents itself. In this endeavor, the harder investors try, the worse they do. In one major study, the quintile of investors who traded the most over a five year period earned an average annualized 7.1 percentage points less than the quintile that traded the least.19
The Folly of Wall Street
Relying on experts does little to help. Wall Street employs an army of analysts to follow the every move of all the major companies traded on the market, predicting their earnings and their expected performance relative to peers, but on the whole they are about as effective as a strategy of throwing darts. Burton Malkiel explains in his book A Random Walk Down Wall Street how he tracked the one and five year earnings forecasts on companies in the S&P 500 from analysts at 19 Wall Street firms and found that in aggregate the estimates had no more predictive power than if you had just assumed a given company’s earnings would grow at the same rate as the long-term average rate of growth in the economy. This is consistent with a much broader body of literature demonstrating that the predictions of statistical prediction rules - formulas that make predictions based on simple statistical rules - reliably outperform those of human experts. Statistical prediction rules have been used to predict the auction price of Bordeaux better than expert wine tasters,20 marital happiness better than marriage counselors,21 academic performance better than admissions officers,22 criminal recidivism better than criminologists,23 and bankruptcy better than loan officers,24 to name just a few examples. This is an incredible finding that’s difficult to overstate. When considering complex issues such as these our natural intuition is to trust experts who can carefully weigh all the relevant information in determining the best course of action. But in reality experts are simply humans who have had more time to reinforce their preconceived notions on a particular topic and are more likely to anchor their attention on items that only introduce statistical noise.
Back in the world of finance, it turns out that to a first approximation the best estimate on the return to expect from a given stock is the long-run historical average of the stock market, and the best estimate of the return to expect from a stock picking mutual fund is the long-run historical average of the stock market minus its fees. The active stock pickers who manage mutual funds have on the whole demonstrated little ability to outperform the market. To be sure, at any given time there are plenty of managers who have recently beaten the market smartly, and if you look around you will even find a few with records that have been terrific over ten years or more. But just as a coin-flipping contest between thousands of contestants would no doubt yield a few who had uncannily “called it” a dozen or more times in a row, the number of market beating mutual fund managers is no greater than what you should expect as a result of pure luck.25
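The coin-flipping point is easy to make quantitative. In this toy calculation (the 10,000-manager figure is illustrative, not from the paper), each manager's chance of beating the market in a given year is treated as a fair coin flip:

```python
def expected_lucky_streaks(n_managers, years, p_beat=0.5):
    # Expected number of managers who beat the market every year
    # purely by chance, with independent 50/50 odds each year.
    return n_managers * p_beat ** years

# Out of 10,000 managers, how many perfect 10-year records by luck alone?
print(expected_lucky_streaks(10000, 10))
```

About ten spotless decade-long records emerge from luck alone, which is why a handful of terrific track records is weak evidence of skill.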
Expert and amateur investors alike underestimate how competitive the capital markets are. News is readily available and quickly acted upon, and any fact you know about that you think gives you an edge is probably already a value in the cells of thousands of spreadsheets of analysts trading billions of dollars. Professor of Finance at Yale and Nobel Laureate Robert Shiller makes this point in a lecture using an example of a hypothetical drug company that announces it has received FDA approval to market a new drug:
Suppose you then, the next day, read in The Wall Street Journal about this new announcement. Do you think you have any chance of beating the market by trading on it? I mean, you're like twenty-four hours late, but I hear people tell me — I hear, "I read in Business Week that there was a new announcement, so I'm thinking of buying." I say, "Well, Business Week — that information is probably a week old." Even other people will talk about trading on information that's years old, so you kind of think that maybe these people are naïve. First of all, you're not a drug company expert or whatever it is that's needed. Secondly, you don't know the math — you don't know how to calculate present values, probably. Thirdly, you're a month late. You get the impression that a lot of people shouldn't be trying to beat the market. You might say, to a first approximation, the market has it all right so don't even try.26
In that last sentence Shiller hints at one of the most profound and powerful ideas in finance: the efficient market hypothesis. The core of the efficient market hypothesis is that when news that impacts the value of a company is released, stock prices will adjust instantly to account for the new information and bring it back to equilibrium where it’s no longer a “good” or “bad” investment but simply a fair one for its risk level. Because news is unpredictable by definition, it is impossible then to reliably outperform the market as a whole, and the seemingly ingenious investors on the latest cover of Forbes or Fortune are simply lucky.
A Noble Lie
In the 50s, 60s, and 70s several economists who would go on to win Nobel prizes worked out the implications of the efficient market hypothesis and created a new intellectual framework known as modern portfolio theory.27 The upshot is that capital markets reward investors for taking risk: the more risk you take, the higher your return should be in expectation (it may not turn out that way, which is why it’s risky). But the market doesn’t reward unnecessary risk, such as taking out a second mortgage to invest in your friend’s hot dog stand. It only rewards systematic risk, the risk that comes from being exposed to the vagaries of the entire economy, such as interest rates, inflation, and productivity growth.28 Stocks of small companies are riskier and have a higher expected return than stocks of large companies, which are riskier than corporate bonds, which are riskier than Treasury bonds. But owning one small-cap stock doesn’t offer a higher expected return than another small-cap stock, or than a portfolio of hundreds of small caps for that matter. Concentrating in a particular stock merely exposes you to the idiosyncratic risks that particular company faces, for which you are not compensated. By diversifying across as many securities as possible, you can reduce the volatility of your portfolio without lowering its expected return.
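That last claim is just arithmetic, and can be checked with a short Python sketch. The numbers are hypothetical, and the example assumes uncorrelated stocks with identical expected returns and volatilities:

```python
import math

# Hypothetical, identical stocks: 8% expected return, 30% volatility each.
EXPECTED_RETURN = 0.08
STOCK_VOLATILITY = 0.30

def portfolio_volatility(n_stocks: int) -> float:
    """Volatility of an equal-weighted portfolio of n uncorrelated stocks.

    With weights 1/n and zero correlation, portfolio variance is
    n * (1/n)**2 * sigma**2 = sigma**2 / n, so volatility shrinks as
    1/sqrt(n) while the expected return stays at 8% regardless of n.
    """
    return math.sqrt(STOCK_VOLATILITY ** 2 / n_stocks)

for n in (1, 10, 100):
    print(f"{n:>3} stocks: expected return {EXPECTED_RETURN:.1%}, "
          f"volatility {portfolio_volatility(n):.1%}")
# Volatility falls from 30% to about 9.5% to 3.0% as stocks are added.
```

In reality stock returns are positively correlated, so diversification cannot eliminate systematic risk; it only washes out the idiosyncratic component.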
This approach to investing dictates that you should determine an acceptable level of risk for your portfolio, then buy the largest basket of securities possible that targets that risk, ideally while paying the least amount possible in fees. Academic activism in favor of this passive approach gained momentum through the 70s, culminating in the launch of the first commercially available index fund in 1976, offered by The Vanguard Group. The typical index fund seeks to replicate the overall market performance of a broad class of investments such as large US stocks by owning all the securities in that market in proportion to their market weights. Thus if XYZ stock makes up 2% of the value of the relevant asset class, the index fund will allocate 2% of its funds to that stock. Because index funds only seek to replicate the market instead of beating it, they save costs on research and management teams and pass the savings along to investors through lower fees.
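The replication rule is mechanical enough to sketch in a few lines of Python. The tickers and market capitalizations below are hypothetical, chosen so that XYZ makes up 2% of the asset class as in the example above:

```python
# Hypothetical market capitalizations, in $ billions (total: 1000).
MARKET_CAPS = {"XYZ": 20.0, "ABC": 500.0, "DEF": 300.0, "GHI": 180.0}

def index_weights(caps: dict[str, float]) -> dict[str, float]:
    """Cap-weighted index rule: hold each security in proportion to its market value."""
    total = sum(caps.values())
    return {ticker: cap / total for ticker, cap in caps.items()}

weights = index_weights(MARKET_CAPS)
# XYZ is 20 / 1000 = 2% of the market, so the fund allocates 2% to XYZ.
print(weights["XYZ"])
```

Because the portfolio already holds every security at its market weight, price moves do not by themselves force trading, which is part of why index funds can keep costs so low.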
Index funds were originally derided and attracted little investment, but years of passionate advocacy by popularizers such as Jack Bogle and Burton Malkiel, together with the consensus of the economics profession, have helped lift them into the mainstream. Index funds now command trillions of dollars of assets and cover every segment of the market in stocks, bonds, and alternative assets in the US and abroad. In 2003 Vanguard launched its target retirement funds, which took the logic of passive investing even further by providing a single fund that would automatically shift from more aggressive to more conservative index investments as its investors approached retirement. Target retirement funds have since become especially popular options in 401(k) plans.
The rise of index investing has been a boon to individual investors, who have clearly benefited from the lower fees and greater diversification index funds offer. To the extent that investors have embraced passive investing over market timing and active security selection, they have collectively saved themselves a fortune by not giving in to their value-destroying biases. For all the good index funds have done, though, in the decades since their birth in the 70s the intellectual foundation upon which they stand, the efficient market hypothesis, has been all but disproved.
The EMH is now the noble lie of the economics profession; while economists usually teach their students and the public that the capital markets are efficient and unbeatable, their research over the last few decades has shown otherwise. In a telling example, Paul Samuelson, who helped originate the EMH and advocated it in his best-selling textbook, was a large, early investor in Berkshire Hathaway, Warren Buffett’s active investment holding company.29 But real people regularly ruin their lives through sloppy investing, and for them perhaps it is better just to say that beating the market can’t be done, so buy, hold, and forget about it. We, on the other hand, believe a more nuanced understanding of the facts can be helpful.
Shortly after the efficient market hypothesis was first put forth, researchers realized the idea had serious theoretical shortcomings.30 Beginning as early as 1977 they also found empirical “anomalies”: factors other than systematic risk that seemed to predict returns.31 Most of the early findings focused on valuation ratios, measures of a firm’s market price in relation to an accounting measure such as book value or earnings, and found that “cheap” stocks on average outperformed “expensive” stocks, confirming the value investment philosophy first promulgated by the legendary Depression-era investor Benjamin Graham and popularized by his most famous student, Warren Buffett. In 1992 Eugene Fama, one of the fathers of the efficient market hypothesis, published, along with Ken French, a groundbreaking paper demonstrating that the cheapest decile of US stocks, as measured by the price-to-book ratio, outperformed the most expensive decile by an astounding 11.9% per year, despite there being little difference in risk between them.32
A year later, researchers found convincing evidence of a momentum anomaly in US stocks: stocks that had the highest performance over the last 3-12 months continued to outperform relative to those with the lowest performance. The effect size was comparable to that of the value anomaly and again the discrepancy could not be explained with any conventional measure of risk.33
Since then, researchers have replicated the value and momentum effects across larger and deeper datasets, finding comparably large effect sizes in different times, regions, and asset classes. In a highly ambitious paper published in 2013, Clifford Asness (a former student of Fama’s), Tobias Moskowitz, and Lasse Pedersen documented significant value and momentum premiums across 18 national equity markets, 10 currencies, 10 government bonds, and 27 commodity futures.
Though value and momentum are the most pervasive and best documented of the market anomalies, many others have been discovered across the capital markets. These include the small-cap premium34 (small company stocks tend to outperform large company stocks even in excess of what should be expected by their risk), the liquidity premium35 (less frequently traded securities tend to outperform more frequently traded securities), short-term reversal36 (equities with the lowest one-week to one-month performance tend to outperform over short time horizons), carry37 (high-yielding currencies tend to appreciate against low-yielding currencies), roll yield38,39 (bonds and futures at steeply negatively sloped points along the yield curve tend to outperform those at flatter or positively sloped points), profitability40 (equities of firms with higher proportions of profits over assets or equity tend to outperform those with lower profitability), calendar effects41 (stocks tend to have stronger returns in January and weaker returns on Mondays), and corporate action premia42 (securities of corporations that will, currently are, or have recently engaged in mergers, acquisitions, spin-offs, and other events tend to consistently under- or outperform relative to what would be expected by their risk).
Most of these market anomalies appear remarkably robust compared to findings in other social sciences,43 especially considering that they seem to imply trillions of dollars of easy money sitting overlooked in plain sight. Intelligent observers often question how such inefficiencies could possibly persist in the face of such strong incentives to exploit them until they disappear. Several explanations have been put forth; some conflict with one another, but all probably have some explanatory power.
The first interpretation of the anomalies is to deny that they are actually anomalous: the excess returns are instead compensation for risk that isn’t captured by the standard asset pricing models. This is the view of Eugene Fama, who first postulated that the value premium was compensation for assuming a risk of financial distress and bankruptcy that is not fully captured by simply measuring the standard deviation of a value stock’s returns.44 Subsequent research, however, showed that the value effect is not explained by exposure to financial distress.45 More sophisticated arguments point to the fact that the excess returns of value, momentum, and many other premiums exhibit greater skewness, kurtosis, or other higher statistical moments than the broad market: subtle statistical indications of greater risk, but the differences hardly seem large enough to justify the large return premiums observed.46
The only sense in which value and momentum stocks, for example, seem genuinely “riskier” is career risk; though the factor premiums are significant and robust in the long term, they are not consistent or predictable over short time horizons. Reaping their rewards requires patience, and an analyst or portfolio manager who recommends an investment to his clients based on these factors may end up waiting years before it pays off, typically more than enough time to be fired.47 Though any investment strategy is bound to underperform at times, strategies that seek to exploit the factors most predictive of excess returns are especially susceptible to reputational hazard. Value stocks tend to be from unpopular companies in boring, slow-growth industries. Momentum stocks are often from unproven companies with uncertain prospects, or from fallen angels that have only recently experienced a turn of luck. Conversely, stocks that score low on value and momentum factors are typically reputable companies with popular products that are growing rapidly and forging new industry standards in their wake.
Consider, then, two companies in the same industry: Ol’Timer Industries, which has been around for decades and is consistently profitable but whose product lines are increasingly considered uncool and outdated. Recent attempts by the firm’s new CEO to revamp the company’s image have had modest success, but consumers and industry experts expect this merely to delay further inevitable loss of market share to NuTime.ly, founded eight years ago and posting exponential revenue growth and rapid adoption by the coveted 18-35 year old demographic, who typically describe its products using a wide selection of contemporary idioms and slang indicating superior social status and functionality. Ol’Timer Industries’ stock will likely score highly on value and momentum factors relative to NuTime.ly and so have a higher expected return. But consider the incentives of the investment professional choosing between the two: if he chooses Ol’Timer and it outperforms he may be congratulated and rewarded perhaps slightly more than if he had chosen NuTime.ly and it had outperformed, but if he chooses Ol’Timer and it underperforms he is a fool and a laughingstock who wasted clients’ money on his pet theory when “everyone knew” NuTime.ly was going to win. At least if he chooses NuTime.ly and it underperforms it was a fluke that none of his peers saw coming, save for a few wingnuts who keep yammering about the arcane theories of Gene Fama and Benjamin Graham.
For most investors, “it is better for reputation to fail conventionally than to succeed unconventionally” as John Maynard Keynes observed in his General Theory. Not that this is at all restricted to investors, professional or amateur. In a similar vein, professional soccer goalkeepers continue to jump left or right on penalty kicks when statistics show they’d block more shots standing still.48 But standing in place while the ball soars into the upper right corner makes the goalkeeper look incompetent. The proclivity of middle managers and bureaucrats to default to uncontroversial decisions formed by groupthink is familiar enough to be the stuff of popular culture; nobody ever got fired for buying IBM, as the saying goes. Psychological experiments have shown that people will often affirm an obviously false observation about simple facts such as the relative lengths of straight lines on a board if others have affirmed it before them.49
We find ourselves back at the nature of human thinking and the biases and other cognitive errors that afflict it, which is what most interpretations of the market anomalies focus on. Both amateur and professional investors are human beings who are apt to make investment decisions not through a methodical application of modern portfolio theory but based rather on stories, anecdotes, hunches, and ideologies. Most of the anomalies make sense in light of an understanding of some of the most common biases, such as anchoring, availability bias, status quo bias, and herd behavior.50 Rational investors seeking to exploit these inefficiencies may be able to do so to a limited extent, but if they are using other people’s money then they are constrained by the biases of their clients. The more aggressively they attempt to exploit market inefficiencies, the more they risk underperforming the market badly enough, and long enough, to suffer devastating withdrawals of capital.51
It is no surprise then, that the most successful investors have found ways to rely on “sticky” capital unlikely to slip out of their control at the worst time. Warren Buffett invests the float of his insurance company holdings, which behaves in actuarially predictable ways; David Swensen manages the Yale endowment fund, which has an explicitly indefinite time horizon and a rules based spending rate; Renaissance Technologies, arguably the most successful hedge fund ever, only invests its own money; Dimensional Fund Advisors, one of the only mutual fund companies that has consistently earned excess returns through factor premiums, only sells through independent financial advisors who undergo a due diligence process to ensure they share similar investment philosophies.
Building a Better Portfolio
So what is an investor to do? The prospect of delicately crafting a portfolio that is adequately diversified while taking advantage of return premiums may seem daunting, and one may be tempted simply to buy a Vanguard target retirement fund appropriate for one’s age and be done with it. Doing so is certainly a reasonable option. But we believe that with a disciplined investment strategy informed by the findings discussed above, superior results are possible.
The first place to start is an assessment of your risk tolerance. How far can your portfolio fall before it adversely affects your quality of life? For investors saving for retirement with many more years of work ahead of them, the answer will likely be “quite a lot.” With ten years or more to work with, your portfolio will likely recover from even the most extreme bear markets. But people do not naturally think in ten-year increments, and many must live off their portfolio principal. Accept that in the short term your portfolio will sometimes be in the red, and consider what percentage decline over a period of a few months to a year you are comfortable enduring. Over a one-year period the historical “worst case scenario” for diversified stock portfolios is about a 40% decline. For a traditional “moderate” portfolio of 60% stocks and 40% bonds it has been about a 25% decline.52
With a target for how much risk to accept in your portfolio, modern portfolio theory offers a technique for achieving the most efficient tradeoff between risk and return, called mean-variance optimization. An adequate treatment of MVO is beyond the scope of this paper,53 but essentially the task is to forecast expected returns on the major asset classes (e.g. US stocks, international stocks, and investment-grade bonds), then compute the weights for each that achieve the highest expected return for a given amount of risk. We use an approach to mean-variance optimization known as the Black-Litterman model54 and estimate expected returns using a limited number of simple inputs; for example, the expected return on an index of stocks can be closely approximated as the current dividend yield plus the long-run growth rate of the economy.55
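As a rough illustration of the mechanics (not of the Black-Litterman model itself, which is considerably more involved), the following Python sketch estimates expected returns as dividend yield plus long-run growth, then brute-forces the stock/bond mix with the highest expected return under a volatility cap. All inputs are hypothetical, not forecasts:

```python
def expected_return(dividend_yield: float, growth: float) -> float:
    """Rough estimate: an asset class's expected return as current yield plus long-run growth."""
    return dividend_yield + growth

# Hypothetical capital market assumptions for two asset classes.
ASSETS = {
    "Stocks": {"er": expected_return(0.02, 0.05), "vol": 0.16},
    "Bonds":  {"er": expected_return(0.03, 0.00), "vol": 0.05},
}
CORRELATION = 0.1  # assumed stock/bond return correlation

def portfolio_stats(w_stocks: float) -> tuple[float, float]:
    """Expected return and volatility of a stocks/bonds mix (standard two-asset formulas)."""
    w_bonds = 1.0 - w_stocks
    er = w_stocks * ASSETS["Stocks"]["er"] + w_bonds * ASSETS["Bonds"]["er"]
    variance = (
        (w_stocks * ASSETS["Stocks"]["vol"]) ** 2
        + (w_bonds * ASSETS["Bonds"]["vol"]) ** 2
        + 2 * w_stocks * w_bonds * CORRELATION
        * ASSETS["Stocks"]["vol"] * ASSETS["Bonds"]["vol"]
    )
    return er, variance ** 0.5

# Brute-force "optimization": highest expected return with volatility <= 12%.
RISK_CAP = 0.12
feasible = [
    (er, vol, w / 100)
    for w in range(101)
    for er, vol in [portfolio_stats(w / 100)]
    if vol <= RISK_CAP
]
best_er, best_vol, best_w = max(feasible)
print(f"{best_w:.0%} stocks / {1 - best_w:.0%} bonds: "
      f"expected return {best_er:.2%}, volatility {best_vol:.2%}")
```

A real optimizer would search over many asset classes simultaneously and handle estimation error in the inputs, which is precisely the problem the Black-Litterman framework addresses.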
With optimal portfolio weights determined, the investor must next select the investment vehicles used to gain exposure to the various asset classes. Though traditional index funds are a reasonable option, in recent years several “enhanced index” mutual funds and ETFs have been released that provide inexpensive, broad exposure to the hundreds or thousands of securities in a given asset class while enhancing exposure to one or more of the major factor premiums discussed above, such as value, profitability, or momentum. Research Affiliates, for example, licenses a “fundamental index” that has been shown to provide efficient exposure to value and small-cap stocks across many markets.56 These “RAFI” indexes have been licensed to the asset management firms Charles Schwab and PowerShares to be made available through mutual funds and ETFs to the general investing public, and have generally outperformed their traditional index fund counterparts since inception.
Over time, portfolio allocations will drift from their optimized weights as some asset classes inevitably outperform others. Left unchecked, this drift can produce a portfolio that is no longer risk-return efficient. The investor must periodically rebalance the portfolio by selling securities that have become overweight and buying those that are underweight. Research suggests that by setting “tolerance bands” around target asset allocations, monitoring the portfolio frequently, and trading when weights drift outside tolerance, investors can take further advantage of inter-asset-class value and momentum effects and boost return while reducing risk.57
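A tolerance-band rule of this kind can be sketched as follows. The target weights and the 5-point band are hypothetical illustrations, not recommendations:

```python
# Hypothetical target allocation and tolerance band (absolute percentage points).
TARGETS = {"US Stocks": 0.40, "Intl Stocks": 0.20, "Bonds": 0.40}
BAND = 0.05  # rebalance only when some weight drifts more than 5 points from target

def rebalance_trades(current: dict[str, float]) -> dict[str, float]:
    """Return the trades (as portfolio fractions) that restore target weights,
    but only if at least one asset class has drifted outside its band."""
    drifted = any(abs(current[a] - t) > BAND for a, t in TARGETS.items())
    if not drifted:
        return {}  # within bands: leave the portfolio alone
    return {a: t - current[a] for a, t in TARGETS.items()}

# After a stock rally the portfolio has drifted to 48/22/30:
trades = rebalance_trades({"US Stocks": 0.48, "Intl Stocks": 0.22, "Bonds": 0.30})
# Sells about 8 points of US stocks, trims international by 2, buys 10 points of bonds.
print(trades)
```

Note that the rule mechanically sells what has recently done well and buys what has lagged, which is exactly the behavior investors find psychologically hardest to sustain on their own.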
Most investors, however, do not rebalance systematically, perhaps in part because it can be psychologically distressing. Rebalancing necessarily entails regularly selling assets that have been performing well in order to buy ones that have been laggards, exactly when your cognitive biases are most likely to tell you that it’s a bad idea. Indeed, neuroscientists have observed in laboratory experiments that when individuals consider the prospect of buying more of a risky asset that has lost them money, it activates the modules in the brain associated with anticipation of physical pain and anxiety.58 Dealing with investment losses is literally painful for investors.
Many investors may find it helpful to their peace of mind, as well as their portfolio, to outsource the entire process to a party with less emotional attachment to their portfolio. Realistically, most investors have neither the time nor the motivation necessary to attain a firm understanding of modern portfolio theory, research the capital market expectations for various asset classes and securities, and regularly monitor and rebalance their portfolio, all with enough rigor to make it worth the effort compared to a simple indexing strategy. By utilizing the skills of a good financial advisor, however, an investor can leverage the expertise of a professional with the bandwidth to execute these tactics in a cost-efficient manner.
A financial advisor should be able to engage you as an investor and acquire a firm understanding of your goals, needs, and attitudes towards risk, money, and markets. Because he or she can spread the time and resources dedicated to portfolio research, optimization, and trading across an entire practice, a financial advisor should be able to craft a portfolio that is optimized for your personal situation. Financial advisors, as institutional investors, generally have access to institutional-class funds that retail investors do not, including many of those that have demonstrated the greatest dedication to exploiting the factor premiums. Notably, DFA and AQR, the two fund families with the greatest academic support, are generally available to individual investors only through a financial advisor. Should your professionally managed portfolio provide a better risk-adjusted return than a comparable do-it-yourself index fund approach, the advisor’s fees have paid for themselves.
Furthermore, a good financial advisor will make sure your investments are tax efficient and that you are making the most of tax-preferred accounts. Researchers have shown that after asset allocation, asset location, the strategic placement of investments in accounts with different tax treatment, is one of the most important factors in net portfolio returns,59 yet most individual investors largely ignore these effects.60 An advisor’s fees can generally be paid with pre-tax funds as well, further enhancing tax efficiency.
Invest with Purpose
There is something of a paradox involved in investing. Finance is a highly specialized and technical field, but money is a very personal and emotional topic. Achieving the joy and fulfillment associated with financial success requires a large measure of emotional detachment and impersonal pragmatism. Far too often people suffer great loss by confusing loyalties, aspirations, fears, and regrets with the efficient allocation of their portfolio assets. We as advisors hate to see this happen; there is nothing to celebrate about the needless destruction of capital, and it is truly a loss for us all. One of the greatest misconceptions about finance is that investing is a zero-sum game, that one trader’s gain is another’s loss. Nothing could be further from the truth. Economists have shown that one of the greatest predictors of a nation’s well-being is its financial development.61 The more liquid and active our capital markets, the greater our society’s capacity for innovation and progress. When you invest in the stock market, you are contributing your share to the productive capacity of our world; your return is your reward for helping make it better, and outperformance is a sign that you have steered capital to those with the greatest use for it.
With the right accounts and investments in place and a process for managing them effectively, you the investor are freed to focus on what you are working and investing for, and an advisor can work with you to help get you there. Whether you want to travel the world, buy the house of your dreams, send your children to the best college, maximize your philanthropic giving, or simply retire early, an advisor can help you develop a financial plan to turn the dollars and cents of your portfolio into the life you want to live, building more health, wealth, and happiness for you, your loved ones, and the world.
1. “U.S. Stock Ownership Stays at Record Low,” Gallup.
2. “U.S. Investors Not Sold on Stock Market as Wealth Creator,” Gallup.
3. Data provided by Morningstar.
4. Siegel, Stocks for the Long Run, 5-25
5. Dimson et al, Triumph of the Optimists.
6. Ibid. 3
8. Shiller, “Understanding Recent Trends in House Prices and Home Ownership.”
9. Mankiw and Zeldes, for example, find that to justify the historical equity risk premium observed, investors would in aggregate need to be indifferent between a certain payoff of $51,209 and a 50/50 bet paying either $50,000 or $100,000. Mankiw and Zeldes, “The consumption of stockholders and nonstockholders,” 8.
10. For a highly readable introduction to the idea of cognitive biases, see Daniel Kahneman’s book “Thinking, Fast and Slow.” Kahneman has been a pioneer in the field and won the 2002 Nobel prize in economics for his work.
11. Benartzi and Thaler, “Myopic Loss Aversion and the Equity Premium Puzzle.”
12. “Guide to the Markets,” J.P. Morgan Asset Management
13. See, for example, Kruger and Dunning, "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments" and Zuckerman and Jost, "What Makes You Think You're So Popular? Self Evaluation Maintenance and the Subjective Side of the ‘Friendship Paradox’"
14. Svenson, “Are We All Less Risky and More Skillful than Our Fellow Drivers?”
15. Preston and Harris, “Psychology of Drivers in Traffic Accidents.”
16. Zweig, Your Money and Your Brain. 88-91.
17. French and Poterba, “Investor Diversification and International Equity Markets.”
18. Ibid. 14. p. 98-99.
19. Barber and Odean, “Trading is Hazardous to Your Wealth: The Common Stock Investment Performance of Individual Investors.”
20. Ashenfelter et al, “Predicting the Quality and Prices of Bordeaux Wine.”
21. Thornton, "Toward a Linear Prediction of Marital Happiness."
22. Swets et al, "Psychological Science Can Improve Diagnostic Decisions."
23. Carroll et al, "Evaluation, Diagnosis, and Prediction in Parole Decision-Making."
24. Stillwell et al, "Evaluating Credit Applications: A Validation of Multiattribute Utility Weight Elicitation Techniques"
25. See Fama and French, “Luck versus Skill in the Cross-Section of Mutual Fund Returns.” They do find modest evidence of skill at the right tail end of the distribution under the capital asset pricing model. After controlling for the value, size, and momentum factor premiums (discussed below), however, evidence of net-of-fee skill is not significantly different than zero.
26. Shiller, “Efficient Markets vs. Excess Volatility.”
27. Professor Goetzmann of the Yale School of Management has an introductory hypertext textbook on modern portfolio theory available on his website, “An Introduction to Investment Theory.”
28. In the language of modern portfolio theory this risk is known as a security’s beta. Mathematically it is the covariance of the security’s returns with the market’s returns, divided by the variance of the market’s returns.
29. Setton, “The Berkshire Bunch.”
30. For example, Grossman and Stiglitz prove in “On the Impossibility of Informationally Efficient Markets” that market efficiency cannot be an equilibrium, because without excess returns there is no incentive for arbitrageurs to correct mispricings. More recently, Markowitz, one of the fathers of modern portfolio theory, showed in “Market Efficiency: A Theoretical Distinction and So What” that if a couple of key assumptions of MPT are relaxed, the market portfolio is no longer optimal for most investors.
31. Basu, “Investment Performance of Common Stocks in Relation to their Price-Earnings Ratios: A Test of the Efficient Market Hypothesis.”
32. Fama and French, “The Cross-Section of Expected Stock Returns.”
33. Jegadeesh and Titman, “Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency”
34. Ibid. 31.
35. Pastor and Stambaugh, “Liquidity Risk and Expected Stock Returns.”
36. Jegadeesh, “Evidence of Predictable Behavior of Security Returns.”
37. Froot and Thaler, “Anomalies: Foreign Exchange.”
38. Campbell and Shiller, “Yield Spreads and Interest Rate Movements: A Bird’s Eye View.”
39. Erb and Harvey, “The Tactical and Strategic Value of Commodity Futures.”
40. Novy-Marx, “The Other Side of Value: The Gross Profitability Premium.”
41. Thaler, “Seasonal Movements in Security Prices.”
42. Mitchell and Pulvino, “Characteristics of Risk and Return in Risk Arbitrage.”
43. See McLean and Pontiff, “Does Academic Research Destroy Stock Return Predictability?” A meta-analysis of 82 equity return factors was able to replicate 72 using out-of-sample data.
44. Fama and French, “Size and Book-to-Market Factors in Earnings and Returns.”
45. Daniel and Titman, “Evidence on the Characteristics of Cross Sectional Variation in Stock Returns.”
46. Hwang and Rubesam, “Is Value Really Riskier than Growth?”
47. Numerous investor profiles have expounded on the difficulty of being a rational investor in an irrational market. In a recent article in Institutional Investor, Asness and Liew give a highly readable overview of the risk vs. mispricing debate and discuss the problems they encountered launching a value-oriented hedge fund in the middle of the dot-com bubble.
48. Bar-Eli, “Action Bias Among Elite Soccer Goalkeepers: The Case of Penalty Kicks. Journal of Economic Psychology.”
49. Asch, “Opinions and Social Pressure.”
50. Daniel et al provides one of the most thorough theoretical discussions on how certain common cognitive biases can result in systematically biased security prices in “Investor Psychology and Security Market Under- and Overreaction.”
51. Schleifer and Vishny, “The Limits of Arbitrage.”
52. Data provided by Vanguard.
53. Chapter 2 of Goetzmann’s “An Introduction to Investment Theory” provides an introductory discussion.
54. The Black-Litterman model allows investors to combine their estimates of expected returns with equilibrium implied returns in a Bayesian framework that largely overcomes the input-sensitivity problems associated with traditional mean-variance optimization. Idzorek offers a thorough introduction in “A Step-By-Step Guide to the Black-Litterman Model.”
55. Ilmanen’s “Expected Returns on Major Asset Classes” provides a detailed explanation of the theory and evidence of forecasting expected returns.
56. Walkshausl and Lobe, “Fundamental Indexing Around the World.”
57. Buetow et al, “The Benefits of Rebalancing.”
58. Kuhnen and Knutson, “The Neural Basis of Financial Risk Taking.”
59. Dammon et al, “Optimal Asset Location and Allocation with Taxable and Tax-Deferred Investing.”
60. Bodie and Crane, “Personal Investing: Advice, Theory, and Evidence from a Survey of TIAA-CREF Participants.”
61. Yongseok Shin of the Federal Reserve provides a brief review of the literature on this research in “Financial Markets: An Engine for Economic Growth.”
Asch, Solomon E. "Opinions and Social Pressure." Scientific American 193, no. 5 (12 1955).
Ashenfelter, Orley. "Predicting the Quality and Prices of Bordeaux Wine*." The Economic Journal 118, no. 529 (12 2008).
Asness, Clifford and Liew, John. “The Great Divide over Market Efficiency.” Institutional Investor, March 3, 2014.
Asness, Clifford, Moskowitz, Tobias, and Pedersen, Lasse. “Value and Momentum Everywhere.” The Journal of Finance 68, no. 3 (6, 2013).
Bar-Eli, Michael, Ofer H. Azar, Ilana Ritov, Yael Keidar-Levin, and Galit Schein. "Action Bias among Elite Soccer Goalkeepers: The Case of Penalty Kicks." Journal of Economic Psychology 28, no. 5 (12 2007).
Barber, Brad M., and Terrance Odean. "Trading Is Hazardous to Your Wealth: The Common Stock Investment Performance of Individual Investors." The Journal of Finance 55, no. 2 (12 2000).
Basu, S. "Investment Performance of Common Stocks in Relation to Their Price-Earnings Ratios: A Test of the Efficient Market Hypothesis."The Journal of Finance 32, no. 3 (12 1977).
Benartzi, S., and R. H. Thaler. "Myopic Loss Aversion and the Equity Premium Puzzle." The Quarterly Journal of Economics110, no. 1 (12, 1995).
Bodie, Zvi, and Dwight B. Crane. "Personal Investing: Advice, Theory, and Evidence." Financial Analysts Journal 53, no. 6 (12 1997).
Buetow, Gerald W., Ronald Sellers, Donald Trotter, Elaine Hunt, and Willie A. Whipple. "The Benefits of Rebalancing." The Journal of Portfolio Management 28, no. 2 (12 2002).
Campbell, John and Shiller, Robert. “Yield Spreads and Interest Rate Movements: A Bird’s Eye View.” The Econometrics of Financial Markets, 58 no. 3 (1991).
Carroll, John S., Richard L. Wiener, Dan Coates, Jolene Galegher, and James J. Alibrio. "Evaluation, Diagnosis, and Prediction in Parole Decision Making." Law & Society Review 17, no. 1 (12 1982).
Dammon, Robert M., Chester S. Spatt, and Harold H. Zhang. "Optimal Asset Location and Allocation with Taxable and Tax-Deferred Investing." The Journal of Finance 59, no. 3 (12 2004).
Daniel, Kent, and Sheridan Titman. "Evidence on the Characteristics of Cross Sectional Variation in Stock Returns." The Journal of Finance52, no. 1 (12 1997).
Daniel, Kent, Hirshleifer, David, and Subrahmanyam, Avanidhar. “Investor Psychology and Security Market Under- and Overreactions.” The Journal of Finance, 53 no. 6 (1998).
Dimson, Elroy, Marsh, Paul, and Staunton, Mike. Triumph of the Optimists. Princeton: Princeton University Press, 2002.
Erb, Claude B., and Campbell R. Harvey. "The Strategic and Tactical Value of Commodity Futures." CFA Digest 36, no. 3 (12 2006).
Fama, Eugene F., and Kenneth R. French. "The Cross-Section of Expected Stock Returns." The Journal of Finance 47, no. 2 (12 1992).
Fama, Eugene F., and Kenneth R. French. "Luck versus Skill in the Cross-Section of Mutual Fund Returns." The Journal of Finance65, no. 5 (12 2010).
Fama, Eugene F., and Kenneth R. French. "Size and Book-to-Market Factors in Earnings and Returns."The Journal of Finance 50, no. 1 (12 1995).
French, Kenneth and Poterba, James. “Investor Diversification and International Equity Markets.” American Economic Review (1991).
Froot, Kenneth A., and Richard H. Thaler. "Anomalies: Foreign Exchange." Journal of Economic Perspectives 4, no. 3 (12 1990).
Goetzmann, William. An Introduction to Investment Theory. Yale School of Management. Accessed April 09, 2014. http://viking.som.yale.edu/will/finman540/classnotes/notes.html
Grossman, Sanford and Stiglitz, Joseph. “On the Impossibility of Informationally Efficient Markets.” The American Economic Review 70, no. 3 (6, 1980).
“Guide to the Markets.” J.P. Morgan Asset Management. 2014.
Hwang, Soosung and Rubesam, Alexandre. “Is Value Really Riskier Than Growth? An Answer with Time-Varying Return Reversal.” Journal of Banking and Finance, 37 no. 7 (2013).
Idzorek, Thomas. “A Step-by-Step Guide to the Black-Litterman Model.” Ibbotson Associates (2005).
Ilmanen, Antti. “Expected Returns on Major Asset Classes.” Research Foundation of CFA Institute (2012).
Jegadeesh, Narasimhan, and Sheridan Titman. "Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency." The Journal of Finance 48, no. 1 (12 1993).
Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011.
Kruger, Justin, and David Dunning. "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-assessments." Journal of Personality and Social Psychology 77, no. 6 (12 1999).
Kuhnen, Camelia M., and Brian Knutson. "The Neural Basis of Financial Risk Taking." Neuron 47, no. 5 (12 2005).
Malkiel, Burton. A Random Walk Down Wall Street: Time-Tested Strategies for Successful Investing (Tenth Edition). New York: W.W. Norton & Company, 2012.
Mankiw, N. Gregory, and Stephen P. Zeldes. "The Consumption of Stockholders and Nonstockholders." Journal of Financial Economics 29, no. 1 (12 1991).
Markowitz, Harry M. "Market Efficiency: A Theoretical Distinction and So What?" Financial Analysts Journal 61, no. 5 (12 2005).
McLean, David and Pontiff, Jeffrey. “Does Academic Research Destroy Stock Return Predictability?” Working Paper, (2013).
Mitchell, Mark, and Todd Pulvino. "Characteristics of Risk and Return in Risk Arbitrage." The Journal of Finance 56, no. 6 (12 2001).
Novy-Marx, Robert. "The Other Side of Value: The Gross Profitability Premium." Journal of Financial Economics 108, no. 1 (12 2013).
Pastor, Lubos and Stambaugh, Robert. “Liquidity Risk and Expected Stock Returns.” The Journal of Political Economy, 111 no. 3 (6, 2003).
Preston, Caroline E., and Stanley Harris. "Psychology of Drivers in Traffic Accidents." Journal of Applied Psychology 49, no. 4 (12 1965).
Setton, Dolly. “The Berkshire Bunch.” Forbes, October 12, 1998.
Shiller, Robert. “Efficient Markets vs. Excess Volatility.” Yale. Accessed April 09, 2014. http://oyc.yale.edu/economics/econ-252-08/lecture-6
Shiller, Robert. “Understanding Recent Trends in House Prices and Homeownership.” Housing, Housing Finance and Monetary Policy, Jackson Hole Conference Series, Federal Reserve Bank of Kansas City, 2008, pp. 85-123.
Shin, Yongseok. “Financial Markets: An Engine for Economic Growth.” The Regional Economist (July 2013).
Shleifer, Andrei, and Robert W. Vishny. "The Limits of Arbitrage." The Journal of Finance 52, no. 1 (12 1997).
Siegel, Jeremy J. Stocks for the Long Run: The Definitive Guide to Financial Market Returns and Long-term Investment Strategies (Fourth Edition). New York: McGraw-Hill, 2008.
Stillwell, William G., F. Hutton Barron, and Ward Edwards. "Evaluating Credit Applications: A Validation of Multiattribute Utility Weight Elicitation Techniques." Organizational Behavior and Human Performance 32, no. 1 (12 1983).
Svenson, Ola. "Are We All Less Risky and More Skillful than Our Fellow Drivers?" Acta Psychologica 47, no. 2 (12 1981).
Swets, J. A., R. M. Dawes, and J. Monahan. "Psychological Science Can Improve Diagnostic Decisions." Psychological Science in the Public Interest 1, no. 1 (12 2000).
Thaler, Richard. "Anomalies: Seasonal Movements in Security Prices II: Weekend, Holiday, Turn of the Month, and Intraday Effects." Journal of Economic Perspectives 1, no. 2 (12 1987).
Thornton, B. "Toward a Linear Prediction Model of Marital Happiness." Personality and Social Psychology Bulletin 3, no. 4 (12, 1977).
"U.S. Stock Ownership Stays at Record Low." Gallup. Accessed April 09, 2014. http://www.gallup.com/poll/162353/stock-ownership-stays-record-low.aspx.
Walkshäusl, Christian, and Sebastian Lobe. "Fundamental Indexing around the World." Review of Financial Economics 19, no. 3 (12 2010).
Zuckerman, Ezra W., and John T. Jost. "What Makes You Think You're so Popular? Self-Evaluation Maintenance and the Subjective Side of the 'Friendship Paradox'." Social Psychology Quarterly 64, no. 3 (12 2001).
Zweig, Jason. Your Money and Your Brain: How the New Science of Neuroeconomics Can Help Make You Rich. New York: Simon & Schuster, 2007.
I wish to thank Romeo Stevens for the feedback and proofreading he provided for early drafts of this paper. You should go buy his Mealsquares (just look how happy I look eating them there!)
If the section on statistical prediction rules sounded familiar, it's probably because I stole all the examples from this Less Wrong article about them by lukeprog. After you're done giving this article karma, you should go give that one some more.
After I made my South Bay meetup presentation Peter McCluskey wrote on the Bay Area LW mailing list that "Your paper's report of 'a massive study of the sixteen countries that had data on local stock, bond, and cash returns available for every year of the twentieth century' could be considered a study of survivorship bias, in that it uses criteria which exclude countries where stocks lost 100% at some point (Russia, Poland, China, Hungary)." This is a good point and is worth addressing, which some researchers have done in recent years. Dimson, Marsh, and Staunton (2006) find that the surviving markets of the 20th century I cite in my paper dominated the global market capitalization in 1900 and the effect of national stock-market implosions was mostly negligible on worldwide averages. Peter did go on to say that "I don't know of better advice for the average person than to invest in equities, and I have most of my wealth in equities..." so I think we're mostly on the same page at least in terms of practical advice.
In a conversation with Alyssa Vance she similarly expressed skepticism that the equity risk premium has been significantly greater than zero due to the fact that at some point in the 20th century most major economies experienced double-digit inflation and very high marginal rates of taxation on capital income. It is true that taxes and inflation significantly dilute an investor's return, and one would be foolish to ignore their effects. But while they may reduce the absolute attractiveness of equities, the effects of taxes and inflation actually make stocks look more attractive relative to the alternatives of bonds and cash investments. In the US and most jurisdictions, the dividends and capital gains earned on stocks are taxed at preferential rates relative to the interest earned on fixed income investments, which is typically taxed as ordinary income. Furthermore, the majority of individual investors hold a large fraction of their investments in tax-sheltered accounts (such as 401(k)s and IRAs in the US).
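The relative effect is easy to see with a toy calculation. In the sketch below, every rate is a made-up assumption (not a historical estimate), chosen only to show the mechanism: taxing nominal returns and then subtracting inflation hits the lower, less tax-favored fixed-income return harder.

```python
# Toy after-tax, after-inflation return comparison.
# All rates below are hypothetical assumptions, not historical estimates.
inflation = 0.04
stock_nominal, bond_nominal = 0.09, 0.05
stock_tax, bond_tax = 0.15, 0.35  # preferential capital-gains rate vs. ordinary income

def after_tax_real(nominal, tax_rate, inflation):
    """Tax the nominal return, then deflate what's left."""
    after_tax = nominal * (1 - tax_rate)
    return (1 + after_tax) / (1 + inflation) - 1

stocks = after_tax_real(stock_nominal, stock_tax, inflation)
bonds = after_tax_real(bond_nominal, bond_tax, inflation)
print(f"stocks: {stocks:.2%}, bonds: {bonds:.2%}")
# With these assumed rates, stocks keep a positive real return
# while bonds come out slightly negative.
```

The point is not the specific numbers but the asymmetry: taxes and inflation together claim a larger share of the bond return than of the stock return, so the relative case for equities survives.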
At my South Bay meetup presentation, Patrick LaVictoire (among others) expressed incredulity at my claim that retail investors have on average badly underperformed relevant benchmarks, and that by implication institutional investors have outperformed. The source I cite in my paper is gated, but there is plenty of research on actual investor performance. Morningstar regularly publishes data on how investors routinely underperform the mutual funds they invest in by buying into and selling out of them at the wrong times. Data on institutional investors is a little trickier to find, but Busse, Goyal, and Wahal (2010) find that institutional investors managing e.g. pensions, foundations, and endowments on average outperform the broad US equity market in the US equity sleeve of their portfolios. (The language of that paper sounds much more pessimistic, with "alphas are statistically indistinguishable from zero" in the abstract. The key is that the authors control for the size, value, and momentum effects discussed in my paper. In other words, once we account for the fact that institutional investors are taking advantage of the factor premiums that have been shown to most consistently outperform a simple index strategy, they aren't providing any extra value. This ties in with the idea of "shrinking alpha" or "smart beta" that is currently en vogue in my industry.)
I'm happy to address further questions and criticisms in the comments.
Here's a link to a short op-ed about some tips to develop self-control. The author got them from talking with Walter Mischel, a researcher who correlated impulsiveness as a child (measured by the ability to delay eating sweets) with various metrics as an adult (educational attainment, cocaine use, weight). Mischel has a new book coming out, but this is not a review of the book. I thought this might be of interest because it talks a little about how self-control is a skill that can be developed, and it even gives some specific things to do.
1. If possible remove unhelpful triggers from your environment. If not possible, try to reduce the emotional appeal of the trigger by mentally associating it with something unpleasant. One example he gives is imagining a cockroach crawling on the chocolate mousse that a server at a restaurant offers.
2. Develop specific if-then plans such as "if it is before noon, I won't check email" or "If I feel angry, I will count backward from ten." The goal of these kinds of checks is to introduce a delay between impulse and action during which you are reminded of your goal and have a chance to consider the impact of following the impulse on that goal.
3. Link the behavior that you want to modify to a "burning goal" so that you have emotional impetus to actually make the desired change.
I attended Nick Bostrom's talk at UC Berkeley last Friday and got intrigued by these problems again. I wanted to pitch an idea here, with the question: Have any of you seen work along these lines before? Can you recommend any papers or posts? Are you interested in collaborating on this angle in further depth?
The problem I'm thinking about (surely naively, relative to y'all) is: What would you want to program an omnipotent machine to optimize?
For the sake of avoiding some baggage, I'm not going to assume this machine is "superintelligent" or an AGI. Rather, I'm going to call it a supercontroller, just something omnipotently effective at optimizing some function of what it perceives in its environment.
As has been noted in other arguments, a supercontroller that optimizes the number of paperclips in the universe would be a disaster. Maybe any supercontroller that was insensitive to human values would be a disaster. What constitutes a disaster? An end of human history. If we're all killed and our memories wiped out to make more efficient paperclip-making machines, then it's as if we never existed. That is existential risk.
The challenge is: how can one formulate an abstract objective function that would preserve human history and its evolving continuity?
I'd like to propose an answer that depends on the notion of logical depth, as proposed by C.H. Bennett and outlined in section 7.7 of Li and Vitanyi's An Introduction to Kolmogorov Complexity and Its Applications, which I'm sure many of you have handy. Logical depth is a super fascinating complexity measure that Li and Vitanyi summarize thusly:
Logical depth is the necessary number of steps in the deductive or causal path connecting an object with its plausible origin. Formally, it is the time required by a universal computer to compute the object from its compressed original description.
The mathematics is fascinating and better read in the original Bennett paper than here. Suffice it presently to summarize some of its interesting properties, for the sake of intuition.
- "Plausible origins" here are incompressible, i.e. algorithmically random.
- As a first pass, the depth D(x) of a string x is the least amount of time it takes to output the string from an incompressible program.
- There's a free parameter that has to do with precision that I won't get into here.
- A string of length n consisting entirely of 1's and a string of length n of independent random bits are both shallow. The first is shallow because it can be produced by a constant-sized program in time n. The second is shallow because there exists an incompressible program consisting of the output string plus a constant-sized print function, which produces the output in time n.
- An example of a deeper string is the string of length n that for each digit i encodes the answer to the ith enumerated satisfiability problem. Very deep strings can involve diagonalization.
- Like Kolmogorov complexity, logical depth has an absolute and a relative version. Let D(x/w) be the least time it takes to output x from a program that is incompressible relative to w.
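To build intuition for the shallow-string examples above, here is a toy model in Python. It is emphatically not Bennett's actual construction (which uses a universal machine and incompressible programs): programs here run on a made-up two-instruction machine, and "shortest program" stands in for "incompressible program". In this crude toy, runtime simply equals program length, so it cannot exhibit genuinely deep strings; it only illustrates why a repetitive string and a non-self-similar string both come out shallow (runtime at most n).

```python
from itertools import product

# Toy machine (an assumption for illustration, not Bennett's universal machine):
#   '0' / '1' : append that bit to the output       (cost: 1 step)
#   'D'       : append a copy of the output so far  (cost: 1 step)
ALPHABET = "01D"

def run(program, limit):
    """Run a toy program; return (output, steps) or (None, None) if invalid."""
    out, steps = "", 0
    for op in program:
        if op == "D":
            if not out:
                return None, None      # 'D' with empty output is invalid
            out += out
        else:
            out += op
        steps += 1
        if len(out) > limit:
            return None, None          # overshot the target length
    return out, steps

def toy_depth(x, max_len=8):
    """(length, fastest runtime) among the *shortest* programs printing x."""
    for n in range(1, max_len + 1):
        runtimes = [steps
                    for prog in product(ALPHABET, repeat=n)
                    for out, steps in [run("".join(prog), len(x))]
                    if out == x]
        if runtimes:
            return n, min(runtimes)
    return None

# "11111111" has a short, fast generator ('1DDD'); the non-self-similar
# string "10010110" can only be printed literally, bit by bit.
print(toy_depth("11111111"))   # (4, 4)
print(toy_depth("10010110"))   # (8, 8)
```

In the real definition the machine is universal, the minimization is over incompressible programs, and runtime can vastly exceed program length, which is exactly where depth separates from Kolmogorov complexity.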
- It can be updated with observed progress in human history at time t' by replacing h_t with h_t'. You could imagine generalizing this to something that dynamically updates in real time.
- This is a quite conservative function, in that it severely punishes computation that does not depend on human history for its input. It is so conservative that it might result in, just to throw it out there, unnecessary militancy against extra-terrestrial life.
- There are lots of devils in the details. The precision parameter I glossed over. The problem of representing human history and the state of the universe. The incomputability of logical depth (of course it's incomputable!). My purpose here is to contribute to the formal framework for modeling these kinds of problems. The difficult work, like in most machine learning problems, becomes feature representation, sensing, and efficient convergence on the objective.
It's unlikely that by pure chance we are currently writing the correct number of LW posts. So it might be useful to try to figure out if we're currently writing too few or too many LW posts. If commenters are evenly divided on this question then we're probably close to the optimal number; otherwise we have an opportunity to improve. Here's my case for why we should be writing more posts.
Let's say you came up with a new and useful life hack, you have a novel line of argument on an important topic, or you stumbled across some academic research that seems valuable and isn't frequently discussed on Less Wrong. How valuable would it be for you to share your findings by writing up a post for Less Wrong?
Recently I visited a friend of mine and commented on the extremely bright lights he had in his room. He referenced this LW post written over a year ago. That got me thinking. The bright lights in my friend's room make his life better every day, for a small upfront cost. And my friend is probably just one of tens or hundreds of people to use bright lights this way as a result of that post. Given that the technique seems to be effective, that number will probably continue going up, and will grow exponentially via word of mouth (useful memes tend to spread). So by my reckoning, chaosmage has created and will create a lot of utility. If they had kept that idea to themselves, I suspect they would have captured less than 1% of the total value to be had from the idea.
You can reach orders of magnitude more people writing an obscure Less Wrong comment than you can talking to a few people at a party in person. For example, at least 100 logged in users read this fairly obscure comment of mine. So if you're going to discuss an important topic, it's often best to do it online. Given enough eyeballs, all bugs in human reasoning are shallow.
Yes, peoples' time does have opportunity costs. But people are on Less Wrong because they need a break anyway. (If you're a LW addict, you might try the technique I describe in this post for dealing with your addiction. If you're dealing with serious cravings, for LW or video games or drugs or anything else, perhaps look at N-acetylcysteine... a variety of studies suggest it helps reduce cravings (behavioral addictions are pretty similar to drug addictions neurologically btw), it has a good safety profile, and you can buy it on Amazon. Not prescribed by doctors because it's not approved by the FDA. Yes, you could use willpower (it's worked so well in the past...) or you could hit the "stop craving things as much" button, and then try using willpower. Amazing what you can learn on Less Wrong isn't it?)
And LW does a good job of indexing content by how much utility people are going to get out of it. It's easy to look at a post's keywords and score and guess if it's worth reading. If your post is bad it will vanish into obscurity and few will be significantly harmed. (Unless it's bad and inflammatory, or bad with a linkbait title... please don't write posts like that.) If your post is good, it will spread virally on its own and you'll generate untold utility.
Given that above-average posts get read much more than below-average posts, if your post's expected quality is average, sharing it on Less Wrong has high positive expected utility. Like Paul Graham, I think we should be spreading our net wide and trying to capture all of the winners we can.
I'm going to call out one subset of LW commenters in particular. If you're a commenter and you (a) have at least 100 karma, (b) it's over 80% positive, and (c) you have a draft post with valuable new ideas you've been sitting on for a while, you should totally polish it off and share it with us! In general, the better your track record, the more you should be inclined to share ideas that seem valuable. Worst case, you can delete your post and cut your losses.
I have recently come across a very practical example of a kind of "tragedy of the commons" - the unwillingness to invest in assets that benefit stakeholders indiscriminately. Specifically, on large strata-title apartment projects there is a reluctance to implement such measures as:
- Central hot water heating (~10% lower all-up costs, ~20% lower operating costs)
- Solar hot water heating (>20% ROI)
- Solar electric power (~10% ROI)
UNLESS some kind of user-pays system is implemented, which would use up pretty much all of the gains.
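A back-of-the-envelope sketch makes the bind concrete. Every figure below is a hypothetical assumption, not a quote for a real project; the ROI without metering is chosen to be in the ballpark of the solar hot-water figure above.

```python
# Hypothetical shared solar hot-water system on a 50-unit building.
# All figures are made-up assumptions for illustration.
units = 50
install_cost = 100_000.0           # total cost of the shared system
saving_per_unit = 400.0            # yearly energy saving per apartment
metering_cost_per_unit = 350.0     # yearly cost of a user-pays billing setup

gross_saving = units * saving_per_unit                             # 20,000 / yr
net_with_metering = gross_saving - units * metering_cost_per_unit  # 2,500 / yr

print(f"ROI without metering: {gross_saving / install_cost:.1%}")      # 20.0%
print(f"ROI with metering:    {net_with_metering / install_cost:.1%}") # 2.5%
```

Under these assumptions the shared system clears a ~20% ROI, but per-unit metering consumes most of the gain, which is exactly the bind described above: without metering the benefit is a commons, and with metering it barely exists.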
The concern is of course that providing the above systems would create a "commons" that would tend to be exploited.
I am curious if there are any ideas on usable solutions, perhaps some kind of workable protocol that would enable the above, or existing success stories: what made them work?
I think it'd be a good idea to keep a list of the ways we'd like to see LessWrong improve, sorted by popularity, e.g. email alerts for new responses.
So if you have an idea for how LessWrong could be better, post it in the comments. As people up/downvote, we'll get a sense for what the consensus opinions are.
I think there's a pretty good amount to be gained by improving LessWrong.
- I think there's a lot of low-hanging fruit (like email alerts for new responses).
- Conversations here are actually useful and productive, so making conversation easier should produce more of these useful and productive exchanges, rather than more of the unproductive kind.
- Perhaps something big would come out of this list (like meet-ups). Perhaps rationality hack-a-thons (whatever that means)?
Note: I say "ways to improve" instead of "features" because "ways to improve" is more general.
Some time back, I wrote that I was unwilling to continue with investigations into mass downvoting, and asked people for suggestions on how to deal with them from now on. The top-voted proposal in that thread suggested making Viliam_Bur into a moderator, and Viliam graciously accepted the nomination. So I have given him moderator privileges and also put him in contact with jackk, who provided me with the information necessary to deal with the previous cases. Future requests about mass downvote investigations should be directed to Viliam.
Thanks a lot for agreeing to take up this responsibility, Viliam! It's not an easy one, but I'm very grateful that you're willing to do it. Please post a comment here so that we can reward you with some extra upvotes. :)
This summary was posted to LW Main on September 5th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
- Bratislava: 08 September 2014 06:00PM
- Copenhagen - September: This Wavefunction Has Uncollapsed: 13 September 2014 03:00PM
- Houston, TX: 13 September 2014 02:00PM
- Michigan Meetup: 07 September 2014 02:00PM
- Urbana-Champaign: Practical Rationality: 07 September 2014 02:00PM
- Utrecht: Improve your productivity: 06 September 2014 02:00PM
- Utrecht: Debiasing techniques: 21 September 2014 02:00PM
- Utrecht: Effective Altruism and Politics: 05 October 2014 02:00PM
- Utrecht: Artificial Intelligence: 19 October 2014 02:00PM
- Utrecht: Climate Change: 02 November 2014 03:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Austin, TX: 06 September 2014 01:30PM
- [Cambridge MA] Prediction Markets and Futarchy: 07 September 2014 03:30PM
- Canberra: Akrasia-busters!: 13 September 2014 06:00PM
- [Melbourne] September Rationality Dojo - Fixed and Growth Mindset: 07 September 2014 03:30PM
- Moscow Meetup: Codename Felix: 14 September 2014 10:11PM
- Sydney Rationality Dojo - Habits: 07 September 2014 04:00PM
- Sydney Meetup - September: 24 September 2014 06:30PM
- Vienna - Superintelligence: 27 September 2014 03:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.
I'm interested in how easy it would be to simulate just one present-day person's life rather than an entire planet's worth of people. Currently our chatbots are bad enough that we could not populate the world with NPCs; the lone human would quickly figure out that everyone else was... different, duller, incomprehensibly stupid, etc.
But what if the chatbots were designed by a superintelligent AI?
If a superintelligent AI was simulating my entire life from birth, would it be able to do it (for reasonably low computational resources cost, i.e. less than the cost of simulating another person) without simulating any other people in sufficient detail that they would be people?
I suspect that the answer is yes. If the answer is "maybe" or "no," I would very much like to hear tips on how to tell whether someone is an ideal chatbot.
EDIT: In the comments most people are asking me to clarify what I mean by various things. By popular demand:
I interact with people in more ways than just textual communication. I also hear them, and see them move about. So when I speak of chatbots I don't mean bots that can do nothing but chat. I mean an algorithm governing the behavior of a simulated entire-human-body, that is nowhere near the complexity of a brain. (Modern chatbots are algorithms governing the behavior of a simulated human-hands-typing-on-keyboard, that are nowhere near the complexity of a brain.)
When I spoke of "simulating any other people in sufficient detail that they would be people" I didn't mean to launch us into a philosophical discussion of consciousness or personhood. I take it to be common ground among all of us here that very simple algorithms, such as modern chatbots, are not people. By contrast, many of us think that a simulated human brain would be a person. Assuming a simulated human brain would be a person, but a simple chatbot-like algorithm would not, my question is: Would any algorithm complex enough to fool me into thinking it was a person over the course of repeated interactions actually be a person? Or could all the bodies around me be governed by algorithms which are too simple to be people?
I realize that we have no consensus on how complex an algorithm needs to be to be a person. That's OK. I'm hoping that this conversation can answer my questions anyhow; I'm expecting answers along the lines of
(A) "For a program only a few orders of magnitude more complicated than current chatbots, you could be reliably fooled your whole life" or
(B) "Any program capable of fooling you would either draw from massive databases of pre-planned responses, which would be impractical, or actually simulate human-like reasoning."
These answers wouldn't settle the question for good without a theory of personhood, but that's OK with me, these answers would be plenty good enough.
Do Virtual Humans deserve human rights?
I think the idea of storing our minds in a machine so that we can keep on "living" (and I use that term loosely) is fascinating, and it is certainly an oft-discussed topic around here. However, in thinking about keeping our brains on a hard drive, we have to think about rights and how that all works together. Indeed, the technology may be here before we know it, so I think it's important to think about mindclones. If I create a little version of myself that can answer my emails for me, can I delete him when I'm done with him, or just trade him in for a new model like I do iPhones?
I look forward to the discussion.
I get pretty anxious about open-ended decisions. I often spend an unacceptable amount of time agonizing over things like what design options to get on a custom suit, or what kind of job I want to pursue, or what apartment I want to live in. Some of these decisions are obviously important ones, with implications for my future happiness. However, in general my sense of anxiety is poorly calibrated with the importance of the decision. This makes life harder than it has to be, and lowers my productivity.
I moved apartments recently, and I decided that this would be a good time to address my anxiety about open-ended decisions. My hope is to present some ideas that will be helpful for others with similar anxieties, or to stimulate helpful discussion.
One promising way of dealing with decision anxiety is to practice making decisions without worrying about them quite so much. Match your clothes together in a new way, even if you're not 100% sure that you like the resulting outfit. Buy a new set of headphones, even if it isn't the “perfect choice.” Aim for good enough. Remind yourself that life will be okay if your clothes are slightly mismatched for one day.
This is basically exposure therapy – exposing oneself to a slightly aversive stimulus while remaining calm about it. Doing something you're (mildly) afraid to do can have a tremendously positive impact when you try it and realize that it wasn't all that bad. Of course, you can always start small and build up to bolder activities as your anxieties diminish.
For the past several months, I had been practicing this with small decisions. With the move approaching in July, I needed some more tricks for dealing with a bigger, more important decision.
Reasoning with yourself
It helps to think up reasons why your anxieties aren't justified. As in actual, honest-to-goodness reasons that you think are true. Check out this conversation between my System 1 and System 2 that happened just after my roommates and I made a decision on an apartment:
System 1: Oh man, this neighborhood [the old neighborhood] is such a great place to go for walks. It's so scenic and calm. I'm going to miss that. The new neighborhood isn't as pretty.
System 2: Well that's true, but how many walks did we actually take in five years living in the old neighborhood? If I recall correctly, we didn't even take two per year.
System 1: Well, yeah... but...
System 2: So maybe “how good the neighborhood is for taking walks” isn't actually that important to us. At least not to the extent that you're feeling. There were things that we really liked about our old living situation, but taking walks really wasn't one of them.
System 1: Yeah, you may be right...
Of course, this “conversation” took place after the decision had already been made. But making a difficult decision often entails second-guessing oneself, and this too can be a source of great anxiety. As in the above, I find that poking holes in my own anxieties really makes me feel better. I do this by being a good skeptic and turning on my critical thinking skills – only instead of, say, debunking an article on pseudoscience, I'm debunking my own worries about how bad things are going to be. This helps me remain calm.
The last piece of this process is something that should help when making future decisions. I reasoned that if my System 1 feels anxiety about things that aren't very important – if it is, as I said, poorly calibrated – then perhaps I can re-calibrate it.
Before moving apartments, I decided to make predictions about what aspects of the new living situation would affect my happiness. “How good the neighborhood is for walks” may not be important to me, but surely there are some factors that are important. So I wrote down things that I thought would be good and bad about the new place. I also rated them on how good or bad I thought they would be.
In several months, I plan to go back over that list and compare my predicted feelings to my actual feelings. What was I right about? This will hopefully give my System 1 a strong impetus to re-calibrate, and only feel anxious about aspects of a decision that are strongly correlated with my future happiness.
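The bookkeeping for this exercise is simple; here is one way it could be sketched (the factors and ratings below are hypothetical placeholders, not my actual list):

```python
# Predicted vs. actual impact of each factor on happiness, rated -5 to +5.
# All entries are hypothetical placeholders for illustration.
ratings = {
    "commute time":  (+3, +1),   # (predicted, actual)
    "natural light": (+2, +4),
    "walkability":   (-2,  0),
    "kitchen size":  (+1, +1),
}

for factor, (pred, actual) in ratings.items():
    print(f"{factor:14s} predicted {pred:+d}, actual {actual:+d}, miss {actual - pred:+d}")

# Big misses mark the factors System 1 over- or under-weights.
mean_abs_miss = sum(abs(a - p) for p, a in ratings.values()) / len(ratings)
print(f"mean absolute miss: {mean_abs_miss:.2f}")
```

The per-factor misses are what matter for re-calibration: a consistently large miss on one factor is a concrete signal about which anxieties to discount next time.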
I think we each carry in our heads a model of what is possible for us to achieve, and anxiety about the choices we make limits how bold we can be in trying new things. As a result, I think that my attempts to feel less anxiety about decisions will be very valuable to me, and allow me to do things that I couldn't do before. At the same time, I expect that making decisions of all kinds will be a quicker and more pleasant process, which is a great outcome in and of itself.
I found the below link which is in the spirit of Lifestyle interventions to increase longevity:
> Medical researchers have been steadily building evidence that prolonged sitting is awful for your health. One major problem is that blood can pool in the legs of a seated person, causing arteries to start losing their ability to control the rate of blood flow. A new experimental study (abstract) has discovered it's quite easy to negate these detrimental health effects: all you need to do is take a leisurely, 5-minute walk for every hour you sit. "The researchers were able to demonstrate that during a three-hour period, the flow-mediated dilation, or the expansion of the arteries as a result of increased blood flow, of the main artery in the legs was impaired by as much as 50 percent after just one hour. The study participants who walked for five minutes for each hour of sitting saw their arterial function stay the same — it did not drop throughout the three-hour period. Thosar says it is likely that the increase in muscle activity and blood flow accounts for this."
I have returned from a particularly fruitful Google search, with unexpected results.
My question was simple. I was pretty sure that talking to myself aloud makes me temporarily better at solving problems that need a lot of working memory. It is a thinking tool that I find to be of great value, and that I imagine would be of interest to anyone who'd like to optimize their problem solving. I just wanted to collect some evidence on that, make sure I'm not deluding myself, and possibly learn how to enhance the effect.
This might be just lousy Googling on my part, but the evidence is surprisingly unclear and disorganized. There are at least three separate Wiki pages for it. They don't link to each other. Instead they present the distinct models of three separate fields: autocommunication in communication studies, semiotics, and other cultural studies; intrapersonal communication ("self-talk" redirects here) in anthropology and (older) psychology; and private speech in developmental psychology. The first is useless for my purpose, the second mentions "may increase concentration and retention" with no source, and the third confirms my suspicion that this behavior boosts memory, motivation and creativity, but it only talks about children.
Google Scholar yields lots of sports-related results for "self-talk" because it can apparently improve the performance of athletes and if there's something that obviously needs the optimization power of psychology departments, it is competitive sports. For "intrapersonal communication" it has papers indicating it helps in language acquisition and in dealing with social anxiety. Both are dwarfed by the results for "private speech", which again focus on children. There's very little on "autocommunication" and what is there has nothing to do with the functioning of individual minds.
So there's a bunch of converging pieces of evidence supporting the usefulness of this behavior, but they're from several separate fields that don't seem to have noticed each other very much. How often do you find that?
Let me quickly list a few ways that I find it plausible to imagine talking to yourself could enhance rational thought.
- It taps the phonological loop, a distinct part of working memory that might otherwise sit idle in non-auditory tasks. More memory is always better, right?
- Auditory information is retained more easily, so making thoughts auditory helps you remember them later.
- It lets you commit to thoughts, and build upon them, in a way that is more powerful (and slower) than unspoken thought while less powerful (but quicker) than action. (I don't have a good online source for this one, but Inside Jokes should convince you, and has lots of new cognitive science to boot.)
- System 1 does seem to understand language, especially if it does not use complex grammar - so this might be a useful way for results of System 2 reasoning to be propagated. Compare affirmations. Anecdotally, whenever I'm starting a complex task, I find stating my intent out loud makes a huge difference in how well the various submodules of my mind cooperate.
- It lets separate parts of your mind communicate in a fairly natural fashion, slows each of them down to the speed of your tongue and makes them not interrupt each other so much. (This is being used as a psychotherapy method.) In effect, your mouth becomes a kind of talking stick in their discussion.
All told, if you're talking to yourself you should be more able to solve complex problems than somebody of your IQ who doesn't, although somebody of your IQ with a pen and a piece of paper should still outthink both of you.
Given all that, I'm surprised this doesn't appear to have been discussed on LessWrong. Honesty: Beyond Internal Truth comes close but goes past it. Again, this might be me failing to use a search engine, but I think this is worth more of our attention than it has gotten so far.
I'm now almost certain talking to myself is useful, and I already find hindsight bias trying to convince me I've always been so sure. But I wasn't - I was suspicious because talking to yourself is an early warning sign of schizophrenia, and is frequent in dementia. But in those cases, it might simply be an autoregulatory response to failing working memory, not a pathogenetic element. After all, its memory-enhancing effect is what the developmental psychologists say the kids use it for. I do expect social stigma, which is why I avoid talking to myself when around uninvolved or unsympathetic people, but my solving of complex problems tends to happen away from those anyway, so that hasn't really been an issue.
So, what do you think? Useful?
Consequentialism traditionally doesn't distinguish between acts of commission and acts of omission. Not flipping the lever to the left is equivalent to flipping it to the right.
But there seems to be one clear case where the distinction is important. Consider a moral learning agent: it must act in accordance with human morality and desires, about which it is currently uncertain.
For example, it may consider whether to forcibly wirehead everyone. If it does so, then everyone will agree, for the rest of their existence, that the wireheading was the right thing to do. Therefore, across the whole future span of human preferences, humans agree that wireheading was correct, apart from a very brief period of objection in the immediate future. Given that human preferences are known to be inconsistent, this seems to imply that forcible wireheading is the right thing to do (if you happen to personally approve of forcible wireheading, replace that example with some other forcible rewriting of human preferences).
What went wrong there? Well, this doesn't respect "conservation of moral evidence": the AI got the moral values it wanted, but only through the actions it took. This is very close to the omission/commission distinction. We'd want the AI not to take actions (commission) that determine the (expectation of the) moral evidence it gets. Instead, we'd want the moral evidence to accrue "naturally", without interference and manipulation from the AI (omission).
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Edit: Sorry, I didn't realize there has been so much discussion on this already! I thought I had just stumbled across some obscure product haha. Anyway, I've been reading through discussions here, on Hacker News, Tim Ferriss's blog etc. There's been a lot of talk about whether or not this is truly a "replacement for eating" (or whatever the term is). I think the more interesting question is whether it's a good idea to:
- Have Soylent once or twice a day.
- Have whole food snacks throughout the day like cheerios, trail mix, fruits etc.
- Have a nice big dinner each day.
- Maybe focus more on whole foods on weekends when you have more time.
My initial impression is that it is a good idea to use it as a once or twice a day thing.
- It saves time. To me, this is huge.
- It saves money.
- It makes it easier to consume fewer calories and less fat, sugar, and salt. I'm surprised this health benefit isn't talked about more. Most American diets have way too much of all four. I think Soylent helps in this area for two main reasons: a) it makes you feel full faster, and b) it doesn't have as many calories or as much fat, sugar, and salt as a typical diet probably does.
- It is probably way more nutritious than the meal it's replacing. Typical diets probably are lacking in certain nutrients, and Soylent will probably help to "fill in these gaps". Again, another huge benefit that I'm surprised doesn't get talked about as much (although this doesn't apply for people who use multivitamins).
- There really doesn't seem to be anything unhealthy about having it once or twice a day. I'm not very confident about this claim because it hasn't been studied enough, but so far I haven't heard of anyone experiencing health problems from Soylent* as a once or twice a day thing, and meal replacement products like Soylent seem to have been around for a while without causing anyone problems.
*The two main problems (digestive issues and headaches) seem to be sufficiently addressed by 1. Adopting it slowly into your diet (over the course of 5 days or so) and 2. Making sure you get enough salt.
Original Post: you could ignore this if you're familiar with Soylent
I've just come across a meal replacement drink called Soylent - http://www.soylent.me/.
Pros:
- Cheap (~$3/meal)
- Fast (just add water to the powder; no cooking or cleaning)
- I could work while I drink it (I'm a slow eater and don't like to work while I'm eating, so this would save me a lot of time)
- Doesn't go bad for about 2 years
Concerns:
- It may be lacking certain essential nutrients.
- It may have detrimental effects on my health in the long term.
Reasons not to worry too much:
- Tube feeding has been around for a while and doesn't seem to have any long-term ill effects (from what I know).
- There doesn't seem to be anything odd about the ingredients that would be detrimental. When you eat food and digest it, it becomes something pretty similar to what's in the formula. In fact, it seems that the ingredients in the formula are simpler than the components of whole foods, and thus there should be less stress on your digestive system.
- Meal replacement drinks have been around for a while and don't seem to have any long-term ill effects (from what I know).
However I really don't have enough information to make any reasonably strong conclusions. Those bullet points above are more vague suspicions than evidence backed knowledge.
So do any of you guys know anything about Soylent or meal replacement drinks/bars/etc.? Are they healthy? Are there things I haven't accounted for?
Also, I'm sorta surprised this isn't more popular. Most people I know hate cooking and cleaning and shopping and spending so much time and money on food. I think most people would be more than happy to have Soylent (or something similar) for a meal or two each day, and then have a big dinner or something. It would save a ton of money and time, and would reduce the amount of fat and sugar in the person's diet. And because you're spending less money on food and consuming less fat and sugar, you could justify eating out or ordering in a splurge meal more often! What do you guys think? Why isn't this more popular? Are people really that afraid of the health effects?
(I'm not being hypocritical. I know that *I've* been asking about the health effects and seem to be worried about them, but I wouldn't think most people would approach this the same way I am. If I lived on an island isolated from other people, was told about Soylent, and was asked to guess its popularity, I would guess it to be very high. I would think people would see that it's pretty nutritious, that there aren't really any known risks or reasons to think there would be risks, and be eager to save time and money by using Soylent.)
The article may be gated. (I have a subscription through my school.)
So what is the secret of looking into the future? Initial results from the Good Judgment Project suggest the following approaches. First, some basic training in probabilistic reasoning helps to produce better forecasts. Second, teams of good forecasters produce better results than good forecasters working alone. Third, actively open-minded people prosper as forecasters.
But the Good Judgment Project also hints at why so many experts are such terrible forecasters. It’s not so much that they lack training, teamwork and open-mindedness – although some of these qualities are in shorter supply than others. It’s that most forecasters aren’t actually seriously and single-mindedly trying to see into the future. If they were, they’d keep score and try to improve their predictions based on past errors. They don’t.
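The "keep score" habit the excerpt describes is often operationalized with the Brier score. A minimal sketch, using invented forecasts and outcomes:

```python
# A minimal sketch of "keeping score": the Brier score over a set of
# binary forecasts. The probabilities and outcomes below are invented.
forecasts = [0.9, 0.2, 0.7, 0.5]   # predicted probability the event happens
outcomes  = [1,   0,   1,   0]     # 1 if the event happened, 0 if not

# Brier score: mean squared difference between forecast and outcome.
# 0 is perfect; 0.25 matches always guessing 50%; 1 is maximally wrong.
brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.4f}")
```

Tracking this number over time, and looking at which individual forecasts inflate it, is exactly the error-driven improvement loop the article says most pundits skip.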
The TV series 'Utopia' just started.
"The series follows a cast of 15 men and women who are placed in isolation and filmed twenty-four hours a day for one year. The cast must create their own society and figure out how to survive. The series will be shown twice a week, but there will be online streaming 24/7 with 129 hidden and unhidden cameras all over the Utopia compound. The live streams will begin on August 29, the day when the 15 pioneers will enter Utopia. Over 5,000 people auditioned for the series. Every month three pioneers will be nominated and could be sent back to their everyday lives. The live streamers will decide which new pioneers get their chance to become Utopian." (source: http://en.wikipedia.org/wiki/Utopia_(U.S._reality_TV_series))
Since every month new 'pioneers' will be introduced, you can still audition for the series; here's how: http://www.utopiatvcasting.com/how-to-audition. I would love to see a well-trained rationalist teaching "the world" some applied rationality principles, and I think this TV show would be an excellent medium to reach the "average person". It would also be nice to see someone explaining what Utopia means to a transhumanist. Let us know if you apply.
Assume you had access to
1. a self-improving artificial general intelligence (AGI) of at least human-capability or better; and
2. a lie detector that the AGI is known to be susceptible to. A lie for this purpose is any intentional deception.
The AI is not omnipotent. It is running on a physical machine in the real world with limited inputs. One of these inputs is a mechanism by which you can ask it questions and receive answers filtered through the lie detector. A little light on the box turns red if the machine was intentionally deceptive in any way, or green otherwise. As it happens the lie detector is probabilistic, but the chance of a false negative (a lie escaping detection) is infinitesimal: 1 in 10^40, let's say, but assume you could make it even smaller if you needed to. The chance of a false positive (non-deception being labeled a lie) is effectively zero.
What could you do with this setup?
EDIT: Some of the details above are provided simply to make the problem tractable. E.g. having the deception detector be probabilistic allows its operating requirements to be less than the AI itself. If the AI is running on a supercomputer, it seems intractable to require a super-supercomputer for the deception detector.
EDIT2: Deception in this case simply means falsifying or omitting information. Loosely speaking, if the AI falsifies the computational traces fed into the deception detector, or leaves out any information in its response, this is detectable. Presumably the UFAI could output a very nice, very convincing plan of action with very tiny fine print hidden somewhere along the lines of "PS: This is all a lie! You implement this and I'm going to turn you all into meaty paperclips. Haha!" and it would get past the deception detector. But I would rather discussion not get sidetracked by such legalistic, nitpicky scenarios. Assume the humans involved are competent, conservative, responsible people who have set up institutional safeguards to prevent hasty action and make sure that output is sufficiently analyzed down to the very last digital bit by a competent, international team of highly rational people before being acted upon.
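A quick sanity check on what that error rate buys you: with a per-lie escape probability of 10^-40, even a billion adversarial answers leave only a vanishing chance that any lie slips through. A sketch using the numbers from the setup above:

```python
# Back-of-the-envelope check on the detector's guarantee: if each lie
# escapes detection with probability p, the chance that at least one of
# n lies goes undetected is 1 - (1 - p)**n, which is at most n * p.
p = 1e-40   # false negative rate from the setup above
n = 10**9   # a billion adversarial answers, as an illustration

# Note: (1 - p) rounds to exactly 1.0 in double precision, so the direct
# formula underflows to zero here; the union bound n * p is the safe estimate.
prob_any_escape = 1 - (1 - p) ** n
union_bound = n * p
print(f"union bound on any undetected lie: {union_bound:.1e}")
```

The practical upshot is that, for any realistic number of queries, the detector's failure probability is dominated by everything *outside* the detector (the fine-print loophole above, human error in reading the output), not by the 10^-40 itself.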
I just learned of The Future Library project. In short, famous authors will be asked to write new, original fiction that will not be released until 2114. The first author announced was Margaret Atwood, of The Handmaid's Tale fame.
I learned of this when a friend posted on Facebook that "I'm officially looking into being cryogenically frozen due to The Future Library project. See you all in 2114." She meant it as a joke, but after a couple comments she now knows about CI, and she didn't yesterday.
What's one of the most common complaints we hear from Deathists? The future is unknown and scary and there won't be anything there they'd be interested in anyway. Now there will be, if they're Atwood fans.
What's one of the ways artists who give away most of their work (almost all of them nowadays) try to entice people to pay for their albums/books/games/whatever? Including special content that is only available for people who pay (or who pay more). Now there is special content only available for people who are around post-2113.
Which got me to thinking... could we incentivize seeing the future? I know it sounds kinda silly ("What, escaping utter annihilation isn't incentive enough??"), but it seems possible that we could save lives by compiling original work from popular artists (writers, musicians, etc), sealing it tight somewhere, and promising to release it in 100, 200, maybe 250 years. And of course, providing links to cryo resources with all publicity materials.
Would this be worth pursuing? Are there any obvious downsides, aside from cost & difficulty?
This summary was posted to LW main on August 29th. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
- Atlanta August meetup - Media representations: 30 August 2014 07:00PM
- Bratislava: 08 September 2014 06:00PM
- Houston, TX: 13 September 2014 02:00PM
- Urbana-Champaign: Reconstituting: 31 August 2014 02:00PM
- [Utrecht] Topic to be determined: 06 September 2014 02:00PM
- [Utrecht] Debiasing techniques: 20 September 2014 02:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Canberra: Akrasia-busters!: 13 September 2014 06:00PM
- London Meetup - Effective Altruism: 31 August 2014 02:00PM
- [Melbourne] September Rationality Dojo - Fixed and Growth Mindset: 07 September 2014 03:30PM
- Moscow Meetup: 31 August 2014 02:00PM
- Washington, D.C.: Parkour: 31 August 2014 03:00PM
- West LA Meetup: Lightning Talks: 03 September 2014 07:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.
EDIT: The fundraiser was successfully completed, raising the full $500 for worthwhile charities. Yay!
Today's my birthday! And per Peter Hurford's suggestion, I'm holding a birthday fundraiser to help raise money for MIRI, GiveDirectly, and Mercy for Animals. If you like my activity on LW or elsewhere, please consider giving a few dollars to one of these organizations via the fundraiser page. You can specify which organization you wish to donate to in the comment of the donation, or just leave it unspecified, in which case I'll give your donation to MIRI.
If you don't happen to be particularly altruistically motivated, just consider it a birthday gift to me - it will give me warm fuzzies to know that I helped move money for worthy organizations. And if you are altruistically motivated but don't care about me in particular, maybe you still can get yourself to donate more than usual by hacky stuff like someone you know on the Internet having a birthday. :)
If someone else wants to hold their own birthday fundraiser, here are some tips: birthday fundraisers.
It seems that politicians make a lot of decisions that aren't socially optimal because they want money from lobbyists and other campaign contributors. Presumably, the purpose this money serves is to keep them in office by allowing them to advertise a lot the next time they're up for reelection.
So the question then becomes: "Why do they want to remain in office?" I could think of two reasons: money and power. From what I know, politicians have a pretty high salary (congressmen make ~$175k), so that's an understandable motivator. But power is the one I don't understand.
Supposedly they want to remain in office so they could use their power to have an influence. I don't know too much about politics, but it seems that politicians spend most of their time catering to lobbyists and voters rather than pushing the things they actually believe in. So much so that they aren't actually exerting that much power. And it seems that most of this catering is to special interests and is socially suboptimal. (I may very well be wrong on these points. I really don't know but it's the impression I get.)
Why are congressmen so motivated to stay in office, make $175k a year, exert a minimal amount of real power, and spend their time catering to lobbyists and making socially suboptimal decisions? I'm sure they could make twice as much in the private sector. I feel like there's something obvious that I'm missing here, but I'm genuinely confused.
Although I feel that Nick Bostrom’s new book “Superintelligence” is generally awesome and a well-needed milestone for the field, I do have one quibble: both he and Steve Omohundro appear to be more convinced than I am by the assumption that an AI will naturally tend to retain its goals as it reaches a deeper understanding of the world and of itself. I’ve written a short essay on this issue from my physics perspective, available at http://arxiv.org/pdf/1409.0813.pdf.
give you, some we can't, few have been written up and even fewer in any
well-organized way. Benja or Nate might be able to expound in more detail
while I'm in my seclusion.
Very briefly, though:
The problem of utility functions turning out to be ill-defined in light of
new discoveries of the universe is what Peter de Blanc named an
"ontological crisis" (not necessarily a particularly good name, but it's
what we've been using locally).
The way I would phrase this problem now is that an expected utility
maximizer makes comparisons between quantities that have the type
"expected utility conditional on an action", which means that the AI's
utility function must be something that can assign utility-numbers to the
AI's model of reality, and these numbers must have the further property
that there is some computationally feasible approximation for calculating
expected utilities relative to the AI's probabilistic beliefs. This is a
constraint that rules out the vast majority of all completely chaotic and
uninteresting utility functions, but does not rule out, say, "make lots of
Models also have the property of being Bayes-updated using sensory
information; for the sake of discussion let's also say that models are
about universes that can generate sensory information, so that these
models can be probabilistically falsified or confirmed. Then an
"ontological crisis" occurs when the hypothesis that best fits sensory
information corresponds to a model that the utility function doesn't run
on, or doesn't detect any utility-having objects in. The example of
"immortal souls" is a reasonable one. Suppose we had an AI that had a
naturalistic version of a Solomonoff prior, a language for specifying
universes that could have produced its sensory data. Suppose we tried to
give it a utility function that would look through any given model, detect
things corresponding to immortal souls, and value those things. Even if
the immortal-soul-detecting utility function works perfectly (it would in
fact detect all immortal souls) this utility function will not detect
anything in many (representations of) universes, and in particular it will
not detect anything in the (representations of) universes we think have
most of the probability mass for explaining our own world. In this case
the AI's behavior is undefined until you tell me more things about the AI;
an obvious possibility is that the AI would choose most of its actions
based on low-probability scenarios in which hidden immortal souls existed
that its actions could affect. (Note that even in this case the utility
function is stable!)
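The failure mode described above, a utility function that "doesn't run on" the best-fitting model, can be sketched as a toy program (the object kinds and model structure below are invented for illustration):

```python
# Toy illustration of an ontological crisis: a utility function written
# to detect "soul" objects in a world-model. On a model that contains no
# such objects, it detects nothing of value, so every action scores the
# same and the maximizer's choice is effectively undefined.
def soul_utility(model):
    # Counts one utility point per soul; detects souls perfectly
    # *within models whose ontology contains them*.
    return sum(1 for obj in model["objects"] if obj["kind"] == "soul")

dualist_model = {"objects": [{"kind": "soul"}, {"kind": "soul"}]}
physical_model = {"objects": [{"kind": "atom"}, {"kind": "atom"}]}

print(soul_utility(dualist_model))   # the function "runs on" this model
print(soul_utility(physical_model))  # crisis: no utility-having objects found
```

Note that, as the email says, the utility function itself is perfectly stable throughout; the problem is that the hypothesis that best fits the evidence gives it nothing to bind to.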
Since we don't know the final laws of physics and could easily be
surprised by further discoveries in the laws of physics, it seems pretty
clear that we shouldn't be specifying a utility function over exact
physical states relative to the Standard Model, because if the Standard
Model is even slightly wrong we get an ontological crisis. Of course
there are all sorts of extremely good reasons we should not try to do this
anyway, some of which are touched on in your draft; there just is no
simple function of physics that gives us something good to maximize. See
also Complexity of Value, Fragility of Value, indirect normativity, the
whole reason for a drive behind CEV, and so on. We're almost certainly
going to be using some sort of utility-learning algorithm, the learned
utilities are going to bind to modeled final physics by way of modeled
higher levels of representation which are known to be imperfect, and we're
going to have to figure out how to preserve the model and learned
utilities through shifts of representation. E.g., the AI discovers that
humans are made of atoms rather than being ontologically fundamental
humans, and furthermore the AI's multi-level representations of reality
evolve to use a different sort of approximation for "humans", but that's
okay because our utility-learning mechanism also says how to re-bind the
learned information through an ontological shift.
This sorta thing ain't going to be easy which is the other big reason to
start working on it well in advance. I point out however that this
doesn't seem unthinkable in human terms. We discovered that brains are
made of neurons but were nonetheless able to maintain an intuitive grasp
on what it means for them to be happy, and we don't throw away all that
info each time a new physical discovery is made. The kind of cognition we
want does not seem inherently self-contradictory.
Three other quick remarks:
*) The Omohundrian/Yudkowskian argument is not that we can take an arbitrary
stupid young AI and it will be smart enough to self-modify in a way that
preserves its values, but rather that most AIs that don't self-destruct
will eventually end up at a stable fixed-point of coherent
consequentialist values. This could easily involve a step where, e.g., an
AI that started out with a neural-style delta-rule policy-reinforcement
learning algorithm, or an AI that started out as a big soup of
self-modifying heuristics, is "taken over" by whatever part of the AI
first learns to do consequentialist reasoning about code. But this
process doesn't repeat indefinitely; it stabilizes when there's a
consequentialist self-modifier with a coherent utility function that can
precisely predict the results of self-modifications. The part where this
does happen to an initial AI that is under this threshold of stability is
a big part of the problem of Friendly AI and it's why MIRI works on tiling
agents and so on!
*) Natural selection is not a consequentialist, nor is it the sort of
consequentialist that can sufficiently precisely predict the results of
modifications that the basic argument should go through for its stability.
It built humans to be consequentialists that would value sex, not value
inclusive genetic fitness, and not value being faithful to natural
selection's optimization criterion. Well, that's dumb, and of course the
result is that humans don't optimize for inclusive genetic fitness.
Natural selection was just stupid like that. But that doesn't mean
there's a generic process whereby an agent rejects its "purpose" in the
light of exogenously appearing preference criteria. Natural selection's
anthropomorphized "purpose" in making human brains is just not the same as
the cognitive purposes represented in those brains. We're not talking
about spontaneous rejection of internal cognitive purposes based on their
causal origins failing to meet some exogenously-materializing criterion of
validity. Our rejection of "maximize inclusive genetic fitness" is not an
exogenous rejection of something that was explicitly represented in us,
that we were explicitly being consequentialists for. It's a rejection of
something that was never an explicitly represented terminal value in the
first place. Similarly the stability argument for sufficiently advanced
self-modifiers doesn't go through a step where the successor form of the
AI reasons about the intentions of the previous step and respects them
apart from its constructed utility function. So the lack of any universal
preference of this sort is not a general obstacle to stable self-improvement.
*) The case of natural selection does not illustrate a universal
computational constraint, it illustrates something that we could
anthropomorphize as a foolish design error. Consider humans building Deep
Blue. We built Deep Blue to attach a sort of default value to queens and
central control in its position evaluation function, but Deep Blue is
still perfectly able to sacrifice queens and central control alike if the
position reaches a checkmate thereby. In other words, although an agent
needs crystallized instrumental goals, it is also perfectly reasonable to
have an agent which never knowingly sacrifices the terminally defined
utilities for the crystallized instrumental goals if the two conflict;
indeed "instrumental value of X" is simply "probabilistic belief that X
leads to terminal utility achievement", which is sensibly revised in the
presence of any overriding information about the terminal utility. To put
it another way, in a rational agent, the only way a loose generalization
about instrumental expected-value can conflict with and trump terminal
actual-value is if the agent doesn't know it, i.e., it does something that
it reasonably expected to lead to terminal value, but it was wrong.
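The Deep Blue point reduces to a small calculation (all probabilities below are invented): instrumental value is just expected terminal value, so a crystallized "keep the queen" goal is knowingly overridden whenever the model says sacrificing wins.

```python
# Sketch: an agent with a terminal goal (winning) and no separate term
# for the crystallized instrumental goal (keeping the queen). Instrumental
# value is nothing but P(win | action), so it is revised freely.
def expected_terminal_value(action, model):
    # Terminal utility of winning normalized to 1.0; no bonus for queens.
    return model[action]["p_win"] * 1.0

model = {
    "keep_queen":      {"p_win": 0.40},
    "sacrifice_queen": {"p_win": 0.95},  # a forcing line to checkmate
}

best = max(model, key=lambda a: expected_terminal_value(a, model))
print(best)
```

The queen heuristic only "wins" an internal conflict when the agent lacks the overriding information, i.e., when it doesn't yet see the checkmate, which is exactly the last sentence of the remark above.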
This has been very off-the-cuff and I think I should hand this over to
Nate or Benja if further replies are needed, if that's all right.
"NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1)
I know that many people on LessWrong want nothing to do with "neoreaction." It does seem strange that a website commonly associated with techno-futurism, such as LessWrong, would end up with even the most tangential networked association with an intellectual current, such as neoreaction, that commonly includes nostalgia for absolute monarchies and other atavistic obsessions.
Perhaps blame it on Yvain, AKA Scott Alexander of slatestarcodex.com for attaching this strange intellectual node to LessWrong. ; ) That's at least how I found out about neoreaction, and I doubt that I am alone in this.
Certainly many on LessWrong would view any association with "neoreaction" as a Greek gift to be avoided. I understand the concept of keeping "well-kept gardens" and of politics being the "mind-killer," although some at LessWrong have argued that some of the most important questions humanity will face in the next decades will be questions that are unavoidably "political" in nature. Yes, "politics is hard mode," but so is life itself, and you don't get better at hard mode without practicing in hard mode.
LessWrong proclaims itself as a community devoted to refining the art of rationality. One aspect of the art of rationality is locating the true sources of disagreement between two parties who want to communicate with each other, but who can't help but talk past each other in different languages due to having radically different pre-existing assumptions.
I believe that this is the problem that any discourse between neoreaction and progressivism currently faces.
Even if you have no interest at all in neoreaction or progressivism as ideologies, I invite you to read this analysis as a case study in locating sources of disagreement between ideologies that have different unspoken assumptions. I will try to steelman neoreaction as much as I can, despite the fact that I am more sympathetic to the progressivist point of view.
In particular, I am interested in the following question: to what extent do neoreactionary and progressive disagreements stem from judgments that merely differ in degree? (For example, being slightly more or less pessimistic about X, Y, and Z propositions). Or to what extent do neoreactionary and progressive disagreements stem from assumptions that are qualitatively different?
Normative vs. descriptive assumptions
"Normative" statements are "ought" statements, or judgments of value. "Descriptive" statements are "is" statements, or depictions of reality. While neoreaction and progressivism have a lot of differing descriptive assumptions, there is really only one fundamental normative disagreement, which I will address first.
Normative disagreement #1: Progressivism's subjective values vs. Neoreaction's objective[?] values
As I see it, Progressivism says, "Our subjective values are worth pursuing in and of themselves just because they make us feel good. It does not particularly matter where our values come from. Perhaps we are Cartesian dualists—unmoved movers with free will—who invent our values in an act of existential creation. Or perhaps our values are biological programming—spandrels manufactured by Nature, or as the neoreactionaries personify it, "Gnon." It doesn't matter. In principle, if we could rewire our reward circuits to give us pleasure/fun/novelty/happiness/sadness/tragedy/suffering/whatever we desire* in response to whatever Nature had the automatic (or modified) disposition to offer us, then those good feelings would be just as worthwhile as anything else." (This is why neoreactionaries perceive progressive values as "nihilistic.")
According to this formulation, most LessWrongers, being averse to wireheading in principle, are not full-fledged progressives at this most fundamental level. (Perhaps this explains some of the counter-intuitive overlap between the LessWrong and neoreactionary thoughtsphere....)
[Editorial: In my view, coming to terms with the obvious benefit of wireheading is the ultimate "red pill" to swallow. I am a progressive who would happily wirehead as long as I had concluded beforehand that I had adequately secured its completely automatic perpetuation even in the absence of any further input from me...although an optional override to shut it down and return me to the non-wireheaded state would not be unwelcome, just in case I had miscalculated and found that the system did not attend to my every wish as anticipated.]
*Note that I am aware that our subjective values are complex and that we are "Godshatter." Nevertheless, this does not seem to me to be a fundamental impediment to wireheading. In principle, we should be able to dissect every last little bit of this "Godshatter" and figure out exactly what we want in all of its diversity...and then we can start designing a system of wireheading to give it to us. Is this not what Friendly AI is all about? Doesn't Friendly AI = Wireheading Done "Right"? Alternatively, we could re-wire ourselves to not be Godshatter, and to have a very simple list of things that would make us feel good. I am open to either one. LessWrongers, being neoreactionaries at heart (see below), would insist on maintaining our human complexity, our Godshatter values, and making our wireheading laboriously work around that. Okay, fine. I'll compromise...as long as I get my wireheading in some form. ; )
Neoreaction says, "There is objective value in the principle of "perpetuating biological and/or civilizational complexity" itself*; the best way to perpetuate biological and/or civilizational complexity is to "serve Gnon" (i.e. devote our efforts to fulfilling nature's pre-requisites for perpetuating our biological and/or civilizational complexity); our subjective values are spandrels manufactured by natural selection/Gnon; insofar as our subjective values motivate us to serve Gnon and thereby ensure the perpetuation of biological and/or civilizational complexity, our subjective values are useful. (For example, natural selection makes sex a subjective value by making it pleasurable, which then motivates us to perpetuate our biological complexity). But, insofar as our subjective values mislead us from serving Gnon (such as by making non-procreative sex still feel good) and jeopardize our biological/civilizational perpetuation, we must sacrifice our subjective values for the objective good of perpetuating our biological/civilizational complexity" (such as by buckling down and having procreative sex even if one would personally rather not enjoy raising kids).
*Note that different NRx thinkers might have different definitions about what counts as biological or civilizational "complexity" worthy of perpetuating...it could be "Western Civilization," "the White Race," "Homo sapiens," "one's own genetic material," "intelligence, whether encoded in human brains or silicon AI," "human complexity/Godshatter," etc. This has led to the so-called "neoreactionary trichotomy"—3 wings of the neoreactionary movement: Christian traditionalists, ethno-nationalists, and techno-commercialists.
Most LessWrongers probably agree with neoreactionaries on this fundamental normative assumption, with the typical objective good of LessWrongers being "human complexity/Godshatter," and thus the "techno-commercialist" wing of neoreaction being the one that typically finds the most interest among LessWrongers.
[Editorial: Presumably, each neoreactionary is choosing his/her objective target of allegiance (such as "Western Civilization") because of the warm fuzzies that the idea elicits in him/herself. Has it ever occurred to neoreactionaries that humans' occasional predilection for being awed by a system bigger than themselves (such as "Western Civilization") and sacrificing for that system is itself a "mere" evolutionary spandrel?]
Now, in an attempt to steelman neoreaction's normative assumption, I would characterize it thus: "In the most ultimate sense, neoreactionaries find the pursuit of subjective values just as worthwhile as progressives do. However, neoreactionaries are aware that human beings are short-sighted creatures with finite discount windows. If we tell ourselves that we should pursue our subjective values, we won't end up pursuing those subjective values in a farsighted way that involves, for example, maintaining a functioning civilization so that people continue to follow laws and don't rob or stab each other. Instead, we will invariably party it up and pursue short-term subjective values to the detriment of our long-term subjective values. So instead of admitting to ourselves that we are really interested in subjective value in the long run, we have to tell ourselves a noble lie that we are actually serving some higher objective purpose in order to motivate our primate brains to stick to what will happen to be good for subjective values in the long run."
Indeed, I have found some neoreactionary writers muse on the problem of wanting to believe in God because it would serve as a unifying and motivating objective good, and lamenting the fact that they cannot bring themselves to do so.
Now, onto the descriptive disagreements....
Descriptive assumption #1: Humanity can master nature (progressivism) vs. Nature will always end up mastering humanity (neoreaction).
Whereas progressives tend to have optimism that humankind can incrementally master the laws of nature (not change them, but master them, as in intelligently work around them, much like how we have worked around but not changed gravitation by inventing airplanes), neoreactionaries have a dour pessimism that humankind underestimates the extent to which the laws of nature constantly pull our puppet strings. Far from being able to ever master nature, humankind will always be mastered by nature, by nature's command to "race to the bottom" in order to out-reproduce and out-compete one's rivals, even if that means having to sacrifice the nice things in life.
For specific ways in which nature threatens to master humanity unless humanity somehow finds a way to exert tremendous efforts at collective coordination against nature, see Scott Alexander's "Meditations on Moloch."
Most progressives presumably hold out hope that we can collectively coordinate to overcome Moloch. If nature and its incentives threaten humanity with the strongest and most ruthless conquering the weak and charitable, perhaps we create a world government to prevent that. If nature and its incentives drive down wages to subsistence level, perhaps we create a global minimum wage. If humanity is threatened with dysgenic decline, perhaps a democratic world government organizes a eugenics program.
Descriptive assumption #2: On average, people have, or can be trained to have, far-sighted discount functions (progressivism), vs. people typically have short-sighted discount functions (neoreaction).
Part of the progressive assumption about humanity being able to master nature is that ordinary people are rational enough to see the big picture and submit to such controls if they are needed to avoid the disasters of Moloch. Part of the neoreactionary assumption about nature always mastering humanity is that, except for some bright outliers, most people are short-sighted primates who will insist on trading long-term well-being for short-term frills.
Descriptive assumption #3: Culture is a variable mostly dependent on material conditions (progressivism) vs. Culture is an independent variable with respect to material conditions (neoreaction).
Neoreactionaries often claim that life seems so much better in modern times in comparison to, say, 400 years ago, only because our technological advancement since then has compensated for, and hidden, how our culture has rotted in the meantime. Neoreactionaries argue that, if one could combine our modern technology with, let's say, an absolute monarchy, then life would be so much better. This assumption of being able to mix & match material conditions and political systems, or material conditions and culture, depends on an assumption that culture and social institutions are essentially independent variables. Perhaps with enough will, we can make any set of technologies work well with any set of cultural and social institutions.
Progressives, whether they realize it or not, are probably subtly influenced, instead, by the "historical materialist" (AKA Marxist) view of society which argues that certain material conditions and material incentives tend to automatically generate certain cultural and social responses.
For example, to Marx, increased agricultural productivity in the late middle ages and Renaissance due to better agricultural technologies was a pre-requisite for the "Acts of Enclosure" in England, which booted the "surplus" farmers off of the farms and into the cities as propertyless proletarians who would be willing to work for a wage. Likewise, technologies like steam power were pre-requisites for providing an unprecedentedly profitable way of employing these proletarians to make a profit. (Otherwise, the proletarians might have just been left to rot on the street unemployed, with their numbers dwindling in Malthusian fashion). And because there were new avenues for making a profit, the people who stood to gain from chasing these new profit incentives produced new cultural habits and laws that would enable them to pursue these incentives more effectively. One of these new sets of laws was "laissez-faire" economics. Another was liberal democracy.
To a progressive, the proposition that we could, even theoretically, run our modern technological society through an absolute monarchy would probably seem preposterous. It is not even an option. Our modern society is too complex, with too many conflicting interests to reconcile through any system that prohibits the peaceful discovery and negotiation of these varied interests through a democratic process involving "voice." In reality, people are not content with being able only to exercise the "right of exit" from institutions or governments that they don't like. Perhaps the powerless have no choice but to immigrate. But elites have, historically, more often chosen to stand and fight rather than gracefully exit. Hence, feudalism, civil wars brought on by crises of royal succession, Masonic orders, factions, political parties, "special interest groups," and so on.
Progressives would say, "Do you honestly think that you can tame these beasts, when even a dictator like Hitler was as much beholden to the interest groups and power blocs around him as he was the dictator of events?" Ah, but the neoreactionaries will say, "Hitler's Nazism was still "demotist." It made the mistake of trying to justify itself to the public, if not through elections, then at least implicitly. We won't do that." To which progressives might say, "You might not want to justify yourself to the rabble and to elite power blocs, but they will demand it—and not because they are all infected by some mysterious mental virus called the "Cathedral," but because they see a way to gain an advantage through politics, and in the modern era they have the means and coordination to effectively fight for it."
These are just examples. The take-away point is that, for progressives, culture appears to be more of a dependent variable, not a variable that is independent of material conditions. So, according to progressives, you can't say, "Let's just combine today's technology with absolute monarchy, and voilà!"
Descriptive assumption #4: Western society is currently anabolic/ascendant (progressivism) vs. catabolic/decadent (neoreaction).
Neoreaction often gets caricatured as claiming that "things are getting worse" or "have been getting worse for the past x number of years." This paints a weak straw-man of neoreaction because, on the surface, things seem so much "obviously" better now than ever. However, this isn't quite what neoreactionaries claim.
Neoreactionaries actually claim that Western society is decaying (note the subtle difference). Western society is gradually weakening its ability to reproduce itself. It is, to use a farming metaphor, eating up its seed-corn on present consumption, on instant gratification, which causes things to seem really swell on the surface...for now. According to neoreactionaries, conditions might not yet be getting worse on average (although they will point to inner-city violence and other signs that conditions already have started to get worse in some places), but Western society's "capital stock" is already dwindling.
Envisioned more broadly, a society's "capital" is not just its money. It is its entire basket of tangible and intangible assets that help it reproduce and expand itself. So a society's "capital" would also include things like its citizens, its birth rates, its habits of harmonious gender relations, its education, its habits of civil propriety, its sustaining myths (such as patriotism or religion), its infrastructure, its environmental health [although NRxers tend to not focus on this], etc.
Another term for "decadence" might be "catabolic collapse." A catabolic collapse is when an organism starts consuming its own muscles, its own seed-corn, if you will, in a last-ditch effort to stay alive. By contrast, an "anabolic" process is one that builds muscle—one that saves up capital, if you will. (Hence, "anabolic" steroids).
Neoreactionaries believe that Western society is currently headed for a "catabolic collapse." (See John Michael Greer, author of "How Civilizations Fall: A Theory of Catabolic Collapse." Oddly enough, John Michael Greer started out 10 years ago as a trendy name in anarcho-primitivist intellectual circles. Now his ideas have been embraced by some neoreactionaries such as Nick Land, which makes me ponder whether anarcho-primitivism is really of the "left" or "right" to begin with...)
When it comes to progressives, most, I think, would argue that Western society is not currently catabolic/decadent. Granted, they would point to some problems with "unsustainability," especially with regards to environmental pollution, resource depletion, and maybe public debt levels (especially worrisome to the libertarian-minded). But on the whole, progressives are still optimistic that these problems can be overcome without rolling back liberal democracy.
Now, let's look at some specific worries that neoreaction has about Western decadence....
Descriptive Assumption #5: Our biggest population threat is overshoot and the attendant resource depletion, environmental pollution, and immiseration of living standards (progressivism) vs. Our biggest population threat is a demographic death spiral (neoreaction).
One thing I have noticed when looking at neoreactionary websites is that they are really obsessed with birth rates! They argue that countries with fertility below replacement level are on the road to annihilation. I found this interesting because my first impulse is to feel like this globe is getting too damn crowded.
Perhaps neoreactionaries envision birth rates staying below replacement level from here on out, as a permanent change. Perhaps they foresee world population following a sort of bell-shaped curve. My naive progressive assumption is that our population is already in slight overshoot beyond what can be sustained at our current level of technology, that any present declines in birth rates are probably just enough to bring us into the oscillating plateau of a typical S-shaped population curve, and that better economic prospects could easily reverse the trend. My naive progressive assumption is also that raising kids will remain sufficiently fun and interesting to a large enough pool of adults that, given enough of a feeling of economic security, people will happily continue having kids in sufficient numbers to prevent a die-off of Homo sapiens. In other words, most progressives like myself would not see the need to roll back gender norms in Western society at the present time for the sake of popping out more babies.
Perhaps what worries neoreactionaries, though, is not so much the fear of a global planetary baby shortage, but rather a localized baby shortage among Westerners or Whites. Maybe they fear that all babies are not created equal....
Descriptive assumption #6: "Immigrants are OK" (progressivism) vs. "Immigrants will jeopardize Western Civilization/the White Race/intelligent human complexity/etc." (neoreaction)
Progressives say, "It is not a big deal if Western society has to import some immigrants to keep its population topped off. Immigrant cultures will eventually blend with the "nativist" culture. Historically, this has turned out OK, despite xenophobic fears every time that it will end in disaster. The immigrants will mostly assimilate into the nativist culture. The nativist culture will pick up a few new habits from the immigrants (some of them helpful, some of them harmful, but on the balance nothing disastrous). Nor will the immigrants dirty the nativist gene pool with bad genes. As far as we can tell so far, no significant genetic differences in intelligence and/or physical vigor exist between immigrants and non-immigrants."
Neoreactionaries say, "It is a very big deal if Western society has to import some immigrants to keep its population topped off. Immigrant cultures will not assimilate with the nativist culture. Immigrant cultures will end up imparting a net influence of bad habits on the native culture. Civil decency will be eroded. Crime and societal dysfunction will increase. The native gene pool will also be dirtied with lower-intelligence immigrant genes. (And the only reason we can't see this is because the progressive Establishment AKA the "Cathedral" has systematically distorted the research and discourse around IQ). At worst, Western cities will act as "IQ Shredders." Any intelligent immigrants who seize economic opportunities in wealthy Western cities will see their fertility rates plummet, and the idiots will inherit the Earth à la the movie "Idiocracy"."
More to come in subsequent parts....
My beliefs about the integers are a little fuzzy. I believe the things that ZFC can prove about the integers, but there seems to be more than that. In particular, I intuitively believe that "my beliefs about the integers are consistent, because the integers exist". That's an uncomfortable situation to be in, because we know that a consistent theory can't assert its own consistency.
Should I conclude that my beliefs about the integers can't be covered by any single formal theory? That's a tempting line of thought, but it reminds me of all these people claiming that the human mind is uncomputable, or that humans will always be smarter than machines. It feels like being on the wrong side of history.
It's also dangerous to believe that "the integers exist" due to my having clear intuitions about them, because humans sometimes make mistakes. Before Russell's paradox, someone could be forgiven for thinking that the objects of naive set theory "exist" because they have clear intuitions about sets, but they would be wrong nonetheless.
Let's explore the other direction instead. What if there was some way to extrapolate my fuzzy beliefs about the integers? In full generality, the outcome of such a process should be a Turing machine that prints sentences about integers which I believe in. Such a machine would encode some effectively generated theory about the integers, which we know cannot assert its own consistency and be consistent at the same time.
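The idea that an effectively generated theory is equivalent to a machine that prints its theorems can be sketched concretely. The following toy example is purely illustrative (a hypothetical one-axiom "theory", not any real foundation): the axiom is "0 is even" and the single inference rule takes "n is even" to "n + 2 is even", so the enumerator below prints exactly the theorems of that system.

```python
import itertools

# Toy sketch (hypothetical system): an effectively generated theory is just
# a machine that enumerates its theorems.
# Axiom: "0 is even". Inference rule: if n is even, then n + 2 is even.
def theorems():
    n = 0
    while True:
        yield f"{n} is even"  # every sentence this machine prints is a theorem
        n += 2

# The first few theorems of this tiny "theory about the integers":
print(list(itertools.islice(theorems(), 3)))
# → ['0 is even', '2 is even', '4 is even']
```

Any theory whose theorems can be listed this way falls under Gödel's second incompleteness theorem, which is why the extrapolated machine cannot both be consistent and print a sentence asserting its own consistency.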
So it seems that in the process of extracting my "consistent extrapolated beliefs", something has to give. At some point, my belief in my own consistency has to go, if I want the final result to be consistent.
But if I already know that much about the outcome, it might make sense for me to change my beliefs now, and end up with something like this: "All my beliefs about the integers follow from some specific formal theory that I don't know yet. In particular, I don't believe that my beliefs about the integers are consistent."
I'm not sure if there are gaps in the above reasoning, and I don't know if using probabilistic reflection changes the conclusions any. What do you think?
At present, the LessWrong presence in Brisbane is essentially non-existent. We have Brisbane Skeptics in the Pub, and that's the closest you can get. During the most recent Australia-wide LessWrong hangout, Nick Wolf of Melbourne and Eliot Redelman of Sydney persuaded me to create a Facebook group for LessWrong in Brisbane. This post is solely to announce that.
The group can be found here.
Ideally a meetup will occur once more than the small handful currently on the group have joined.
- Version control
- Linear algebra
- Advanced math
- Bayesian statistics
- Category theory
- Foreign languages
- How to not waste time
Mine are: quantum mechanics, Python, cooking, the language of philosophy.
What learning curve do you wish you'd climbed sooner? Give reasons and stories if you feel like it. Do you think other people should climb the same curves?
Related to: Policy Debates Should Not Appear One-Sided
There is a well-known fable which runs thus:
“Driven by hunger, a fox tried to reach some grapes hanging high on the vine but was unable to, although he leaped with all his strength. As he went away, the fox remarked 'Oh, you aren't even ripe yet! I don't need any sour grapes.' People who speak disparagingly of things that they cannot attain would do well to apply this story to themselves.”
This gives rise to the common expression ‘sour grapes’, referring to a situation in which one incorrectly claims to not care about something to save face or feel better after being unable to get it.
This seems to be related to a general phenomenon, in which motivated cognition leads one to flinch away from the prospect of an action that is inconvenient or painful in the short term by concluding that a less-painful option strictly dominates the more-painful one.
In the fox’s case, the allegedly-dominating option is believing (or professing) that he did not want the grapes. This spares him the pain of feeling impotent in face of his initial failure, or the embarrassment of others thinking him to have failed. If he can’t get the grapes anyway, then he might as well erase the fact that he ever wanted them, right? The problem is that considering this line of reasoning will make it more tempting to conclude that the option really was dominating—that he really couldn’t have gotten the grapes. But maybe he could’ve gotten the grapes with a bit more work—by getting a ladder, or making a hook, or Doing More Squats in order to Improve His Vert.
The fable of the fox and the grapes doesn’t feel like a perfect fit, though, because the fox doesn’t engage in any conscious deliberation before giving up on sour grapes; the whole thing takes place subconsciously. Here are some other examples that more closely illustrate the idea of conscious rationalization by use of overly convenient partitions:
“Be who you are and say what you feel, because those who mind don't matter and those who matter don't mind.”
This advice is neither good in full generality nor bad in full generality. Clearly there are some situations where a person is worrying too much about other people judging them, or is anxious about inconveniencing others without taking their own preferences into account. But there are also clearly situations (like dealing with an unpleasant, incompetent boss) where fully exposing oneself or saying whatever comes into one's head is not strategic and can be outright disastrous. Without taking into account the specifics of the recipient's situation, the advice is of limited use.
It is convenient to absolve oneself of blame by writing off anybody who challenges our first impulse as someone who ‘doesn’t matter’; it means that if something goes wrong, one can avoid the painful task of analysing and modifying one’s behaviour.
In particular, we have the following corollary:
The Fundamental Fallacy of Dating:
“Be yourself and don’t hide who you are. Be up-front about what you want. If it puts your date off, then they wouldn’t have been good for you anyway, and you’ve dodged a bullet!”
In the short-term it is convenient to not have to filter or reflect on what one says (face-to-face) or writes (online dating). In the longer term, having no filter is not a smart way to approach dating. As the biases and heuristics program has shown, people are often mistaken about what they would prefer under reflection, and are often inefficient and irrational in pursuing what they want. There are complicated courtship conventions governing timelines for revealing information about oneself and negotiating preferences, that have evolved to work around these irrationalities, to the benefit of both parties. In particular, people are dynamically inconsistent, and willing to compromise a lot more later on in a courtship than they thought they would earlier on; it is often a favour to both of you to respect established boundaries regarding revealing information and getting ahead of the current stage of the relationship.
For those who have not much practised the skill of avoiding triggering Too Much Information reactions, it can feel painful and disingenuous to even try changing their behaviour, and they rationalise it via the Fundamental Fallacy. At any given moment, changing this behaviour is painful and causes a flinch reaction, even though the value of information of trying a different approach might be very high, and might cause less pain (e.g. through reduced loneliness) in the long term.
We also have:
PR rationalization and incrimination:
“There’s already enough ammunition out there if anybody wants to assassinate my character, launch a smear campaign, or perform a hatchet job. Nothing I say at this point could make it worse, so there’s no reason to censor myself.”
This is an overly convenient excuse. It does not take into account, for example, that new statements provide a new opportunity for one to come to the attention of quote miners in the first place, or that different statements might be more or less easy to seed a smear campaign; ammunition can vary in type and accessibility, so that adding more can increase the convenience of a hatchet job. It might turn out, after weighing the costs and benefits, that speaking honestly is the right decision. But one can’t know that on the strength of a convenient deontological argument that doesn’t consider those costs. Similarly:
“I’ve already pirated so much stuff I’d be screwed if I got caught. Maybe it was unwise and impulsive at first, but by now I’m past the point of no return.”
This again fails to take into account the increased risk of one’s deeds coming to attention; if most prosecutions are caused by (even if not purely about) offences shortly before the prosecution, and you expect to pirate long into the future, then your position now is the same as when you first pirated; if it was unwise then, then it’s unwise now.
The common fallacy in all these cases is that one looks at only the extreme possibilities, and throws out the inconvenient, ambiguous cases. This results in a disconnected space of possibilities that is engineered to allow one to prove a convenient conclusion. For example, the Seating Fallacy throws out the possibility that there are people who mind but also matter; the Fundamental Fallacy of Dating prematurely rules out people who are dynamically inconsistent or are imperfect introspectors, or who have uncertainty over preferences; PR rationalization fails to consider marginal effects and quantify risks in favour of a lossy binary approach.
What are other examples of situations where people (or Less Wrongers specifically) might fall prey to this failure mode?
This is the public group instrumental rationality diary for September 1-15.
It's a place to record and chat about it if you have done, or are actively doing, things like:
- Established a useful new habit
- Obtained new evidence that made you change your mind about some belief
- Decided to behave in a different way in some set of situations
- Optimized some part of a common routine or cached behavior
- Consciously changed your emotions or affect with respect to something
- Consciously pursued new valuable information about something that could make a big difference in your life
- Learned something new about your beliefs, behavior, or life that surprised you
- Tried doing any of the above and failed
Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.
Thanks to cata for starting the Group Rationality Diary posts, and to commenters for participating.
Previous diary: August 16-31
A well-known brainteaser asks about the truth of the statement "this statement is false". If the statement is true, then the sentence must be false, but if it is false, then the sentence must be true. This paradox, far from being just a game, illustrates a question fundamental to understanding the nature of truth itself.
A number of different solutions have been proposed to this paradox (and the closely related Epimenides and Pinocchio paradoxes). One approach is to reject the principle of bivalence - that every proposition must be true or false - and argue that this statement is neither true nor false. Unfortunately, this approach fails to resolve the truth of "this statement is not true". A second approach, called Dialetheism, is to argue that the statement is both true and false, but this fails on "this statement is only false".
Arthur Prior's resolution is to claim that each statement implicitly asserts its own truth, so that "this statement is false" becomes "this statement is false and this statement is true". This latter statement is clearly false. There do appear to be some advantages to constructing a system where each statement asserts its own truth, but the normative claim that truth should always be constructed in this manner seems to be hard to justify.
Another solution (non-cognitivism) is to deny that these statements have any truth content at all, similar to meaningless statements ("Are you a?") or non-propositional statements like commands ("Get me some milk!"). If we take this approach, then a natural question is "Which statements are meaningless?" One answer is to exclude all statements that are self-referential. However, there are a few paradoxes that complicate this. One is the Card paradox, where the front of a card says that the sentence on the back is true and the back says that the sentence on the front is false. Another is Quine's paradox: ""Yields falsehood when preceded by its quotation" yields falsehood when preceded by its quotation". One other common example is: "The statement on the blackboard in Carslaw Room 201 is false". The Card paradox and the blackboard paradox are interesting in that if we declare the Liar paradox to be meaningless, these paradoxes are meaningless or meaningful depending on the state of the world.
This problem has been previously discussed on Less Wrong, but I think that there is more that is worth being said on this topic. Cousin_it noted that the formalist school of philosophy (in maths) believes that "meaningful questions have to be phrased in terms of finite computational processes". Yvain took a similar approach arguing that "you can't use a truth-function to evaluate the truth of a noun until you unpack the noun into a sentence" and that it would require infinite unpacking to evaluate, while "This sentence is in English" would only require a single unpacking.
I'll take a similar approach, but I'll be exploring the notion of truth as a constructed concept. First I'll note that there are at least two different kinds of truth - truth of statements about the world and truth of mathematical concepts. These two kinds of truth are about completely different kinds of objects. The first are true if part of the world is in a particular configuration, and they satisfy bivalence because the world is either in that configuration or not in that configuration.
The second is a constructed system where certain basic axioms start off in the class of true formulas and we have rules of deduction to allow us to add more formulas into this class or to determine that formulas aren't in the class. One particularly interesting class of axiomatic systems has the following deductive rules:
if x is in the true class, then not x is in the false class
if x is in the false class, then not x is in the true class
if not x is in the true class, then x is in the false class
if not x is in the false class, then x is in the true class
If we start with certain primitive propositions defined as true or false and start adding operations like "AND", "OR", "NOT", etc., then we get propositional logic. If we define variables and predicates (functions from variables to boolean values) and add "FOR EACH" and "THERE EXISTS", then we get first-order predicate logic and, later, higher-order predicate logics. These logics work with the given deductive rules and avoid a situation where both x and not x are in the true class, which would, for any non-trivial classical logic, lead to all formulas being in the true class - not a useful system.
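The four deductive rules above can be sketched as a closure computation. This is a toy illustration of my own, not anything from the logic literature: formulas are represented as atom strings or `("not", formula)` tuples, and the rules are applied only within a fixed universe of formulas so the process terminates.

```python
def close(universe, true_class, false_class):
    """Close the true/false classes under the four negation rules,
    considering only formulas drawn from `universe`."""
    changed = True
    while changed:
        changed = False
        for formula in universe:
            if isinstance(formula, tuple) and formula[0] == "not":
                x = formula[1]
                # if x is in the true class, then not-x is in the false class
                if x in true_class and formula not in false_class:
                    false_class.add(formula); changed = True
                # if x is in the false class, then not-x is in the true class
                if x in false_class and formula not in true_class:
                    true_class.add(formula); changed = True
                # if not-x is in the true class, then x is in the false class
                if formula in true_class and x not in false_class:
                    false_class.add(x); changed = True
                # if not-x is in the false class, then x is in the true class
                if formula in false_class and x not in true_class:
                    true_class.add(x); changed = True
    return true_class, false_class

universe = {"p", "q", ("not", "p"), ("not", "q")}
t, f = close(universe, {"p"}, {"q"})
# t == {"p", ("not", "q")}, f == {"q", ("not", "p")}
```

Seeding "p" as true and "q" as false, the closure places "not q" in the true class and "not p" in the false class, and nothing more: truth here is purely a matter of class membership generated by the rules.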
The system has a binary notion of truth which satisfies the law of excluded middle because it was constructed in this manner. Mathematical truth does not exist in its own right; it only exists within a system of logic. Geometry, arithmetic and set theory can all be modelled within the same set-theoretic logic, which has the same rules relating to truth. But this doesn't mean that truth is a set-theoretic concept - set theory is only one possible way of modelling these systems, one which then lets us combine objects from these different domains into a single proposition. Set theory simply shows us that being within the true or false class has similar effects across multiple systems. This explains why we believe that mathematical truth exists, while leaving us with no reason to suppose that this kind of "truth" has an inherent meaning. These aren't models of the truth; "truth" is really just a set of useful models with similar properties.
Once we realise this, these paradoxes completely dissolve. What is the truth value of "This statement is false"? Is it Arthur Prior's solution where he infers that the statement asserts its own truth? Is it invalid because of infinite recursion? Is it both true and false? These questions all miss the point. We define a system that puts statements into the true class, false class or whatever other classes that we want. There is no reason to assume that there is one necessarily best way of determining the truth of the statement. The value of this solution is that this dissolves the paradox without philosophically committing ourselves to formalism or Arthur Prior's notion of truth or Dialetheism or any other such system that would be difficult to justify as being "the true solution". Instead we simply have a choice of which system we wish to construct.
I have also seen a few mentions of Tarski's type hierarchies and Kripke's fixed point theory of truth as resolving the paradox. I can't comment too much because I haven't had time to learn these yet. However, the point of this post is to resolve the paradox without committing us to any specific model of truth beyond the general notion of truth as a construct.
Edit: I removed the discussion of "This statement is true" as it was incorrect (thanks to Manfred). The proper example was, "This statement is either true or false". If it is true, then that works. If it is false, then there is a contradiction. So is it true or is it meaningless given that it doesn't seem to refer to anything? This depends on how we define truth. We can either define truth only for statements that can be unpacked or we can define it for statements that have a single stable value allocation. Either version of truth could work.
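The "single stable value allocation" criterion can be made concrete with a small sketch of my own devising: represent a self-referential statement as a function from its own assumed truth value to the truth value it would then have, and look for fixed points.

```python
def stable_values(statement):
    """Return the truth values v for which the statement, assumed to
    have value v, actually evaluates to v (i.e. the stable allocations)."""
    return [v for v in (True, False) if statement(v) == v]

liar = lambda v: not v                 # "This statement is false"
truth_teller = lambda v: v             # "This statement is true"
true_or_false = lambda v: v or not v   # "This statement is either true or false"

stable_values(liar)           # []            -- no stable allocation: paradox
stable_values(truth_teller)   # [True, False] -- two stable allocations
stable_values(true_or_false)  # [True]        -- exactly one stable allocation
```

Under the "single stable value" definition of truth, only the last statement gets a determinate value; the Liar gets none and the truth-teller gets two, which matches the choice-of-system point above.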
This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.
- Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
- If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
- Please post only under one of the already created subthreads, and never directly under the parent media thread.
- Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
- Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
It shows how easily a population can be influenced if control over a small subset exists.
A key problem for viral marketers is to determine an initial "seed" set [<1% of total size] in a network such that if given a property then the entire network adopts the behavior. Here we introduce a method for quickly finding seed sets that scales to very large networks. Our approach finds a set of nodes that guarantees spreading to the entire network under the tipping model. After experimentally evaluating 31 real-world networks, we found that our approach often finds such sets that are several orders of magnitude smaller than the population size. Our approach also scales well - on a Friendster social network consisting of 5.6 million nodes and 28 million edges we found a seed set in under 3.6 hours. We also find that highly clustered local neighborhoods and dense network-wide community structure together suppress the ability of a trend to spread under the tipping model.
This is relevant for LW because
a) Rational agents should hedge against this.
b) A UFAI could exploit this.
c) It gives hints for making systems robust against this 'exploit'.
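For readers unfamiliar with the tipping model the abstract refers to: it can be sketched as a simple threshold cascade. This is a toy version under my own assumptions (an adjacency dict and a uniform fractional threshold); the paper's actual seed-selection algorithm is considerably more involved.

```python
def spread(graph, seeds, threshold=0.5):
    """Return the final set of adopters under a tipping model: a
    non-seed node adopts once at least `threshold` of its neighbours
    have adopted. `graph` maps each node to its list of neighbours."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, neighbours in graph.items():
            if node in adopted or not neighbours:
                continue
            frac = sum(n in adopted for n in neighbours) / len(neighbours)
            if frac >= threshold:
                adopted.add(node)
                changed = True
    return adopted

# A star network: one seeded leaf can't tip the hub, but two can
# tip the entire network.
star = {"hub": ["a", "b", "d"], "a": ["hub"], "b": ["hub"], "d": ["hub"]}
spread(star, {"a"})       # {'a'}
spread(star, {"a", "b"})  # {'hub', 'a', 'b', 'd'}
```

The star example also illustrates the abstract's final observation in miniature: structure that forces adoption pressure through a few well-connected nodes can either block a cascade or, once those nodes tip, complete it.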
For about four years I have been struggling to write a series of articles presenting a few of my ideas. While this "philosophy" (I'd rather avoid being too pompous about it) is still developing, there is a bunch of stuff of which I have a clear image in my mind. It is a framework for model building, with some possible applications to AI development, paradox resolution and semantics. Nothing of serious impact, but I do believe it would prove useful.
I tried making notes or plans for articles several times, but every time I was discouraged by these problems:
- the presented concept is too obvious
- the presented concept is superfluous
- the presented concept needs more basic ideas to be introduced beforehand
So the core problem is that to show applications of the theory (or, generally, the more interesting results), the more basic concepts must be introduced first. Yet presenting the basics seems boring and uninsightful without the applications. This seems to characterise many complex ideas.
Can you provide me with any practical tips on how to tackle this problem?