

Two Growth Curves

25 AnnaSalamon 02 October 2015 12:59AM

Sometimes, it helps to take a model that part of you already believes, and to make a visual image of your model so that more of you can see it.

One of my all-time favorite examples of this: 

I used to often hesitate to ask dumb questions, to publicly try skills I was likely to be bad at, or to visibly/loudly put forward my best guesses in areas where others knew more than me.

I was also frustrated with this hesitation, because I could feel it hampering my skill growth.  So I would try to convince myself not to care about what people thought of me.  But that didn't work very well, partly because what folks think of me is in fact somewhat useful/important.

Then, I got out a piece of paper and drew how I expected the growth curves to go.

In blue, I drew the apparent-coolness level that I could achieve if I stuck with the "try to look good" strategy.  In brown, I drew the apparent-coolness level I'd have if I instead made mistakes as quickly and loudly as possible -- I'd look worse at first, but then I'd learn faster, eventually overtaking the blue line.

Suddenly, instead of pitting my desire to become smart against my desire to look good, I could pit my desire to look good now against my desire to look good in the future :)

I return to this image of two growth curves often when I'm faced with an apparent tradeoff between substance and short-term appearances.  (E.g., I used to often find myself scurrying to get work done, or to look productive / not-horribly-behind today, rather than trying to build the biggest chunks of capital for tomorrow.  I would picture these growth curves.)

Experiment: Changing minds vs. preaching to the choir

10 cleonid 03 October 2015 11:27AM


1. Problem

In the market economy production is driven by monetary incentives – higher reward for an economic activity makes more people willing to engage in it. Internet forums follow the same principle but with a different currency - instead of money the main incentive of internet commenters is the reaction of their audience. A strong reaction expressed by a large number of replies or “likes” encourages commenters to increase their output. Its absence motivates them to quit posting or change their writing style.

On neutral topics, using audience reaction as an incentive works reasonably well: attention focuses on the most interesting or entertaining comments. However, on partisan issues, such incentives become counterproductive. Political forums and newspaper comment sections demonstrate the same patterns:

  • The easiest way to maximize “likes” for a given amount of effort is by posting an emotionally charged comment which appeals to the audience’s biases (“preaching to the choir”).


  • The easiest way to maximize the number of replies is by posting a low-quality comment that goes against the audience’s biases (“trolling”).


  • Both effects are amplified when the website places the comments with the most replies or “likes” at the top of the page.


The problem is not restricted to low-brow political forums. The following graph, which shows the average number of comments as a function of an article’s karma, was generated from Less Wrong data.


The data suggests that the easiest way to maximize the number of replies is to write posts that are disliked by most readers. For instance, articles with karma of -1 generate on average twice as many comments (20.1±3.4) as articles with karma of +1 (9.3±0.8).
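For illustration, this kind of comparison can be computed with a short script. The data below is made up for the example, and the (karma, comment_count) pair format is my own assumption, not the actual Less Wrong export format:

```python
from collections import defaultdict
from math import sqrt
from statistics import mean, stdev

def mean_comments_by_karma(articles):
    """Group (karma, comment_count) pairs by karma and return, for each
    karma level, the mean comment count and its standard error."""
    groups = defaultdict(list)
    for karma, n_comments in articles:
        groups[karma].append(n_comments)
    summary = {}
    for karma, counts in sorted(groups.items()):
        se = stdev(counts) / sqrt(len(counts)) if len(counts) > 1 else 0.0
        summary[karma] = (mean(counts), se)
    return summary

# Made-up sample: three articles at karma -1, three at karma +1.
sample = [(-1, 18), (-1, 25), (-1, 17), (1, 9), (1, 10), (1, 8)]
for karma, (m, se) in mean_comments_by_karma(sample).items():
    print(f"karma {karma:+d}: {m:.1f} ± {se:.1f} comments on average")
```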

2. Technical Solution

Enabling constructive discussion between people with different ideologies requires reversing the incentives – people need to be motivated to write posts that sound persuasive to the opposite side rather than to their own supporters.

We suggest addressing this problem by changing the voting system. In brief, instead of votes from all readers, comment ratings and position on the page should be based on votes from the opposite side only. For example, in a debate on the minimum wage, for arguments against the minimum wage only the upvotes of minimum wage supporters would be counted, and vice versa.
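As a sketch of the proposed mechanism (the function and data layout are my own illustration, not a spec of any site's API): each comment declares which side it argues for, each vote records the voter's side, and only votes from the opposite side count toward the score.

```python
def cross_side_score(comment_side, votes):
    """Score a comment counting only votes cast by readers from the
    side opposite the comment's own side of the debate.

    comment_side: "pro" or "con", the side the comment argues for.
    votes: iterable of (voter_side, value) pairs, value +1 or -1.
    """
    opposite = "con" if comment_side == "pro" else "pro"
    return sum(value for voter_side, value in votes if voter_side == opposite)

# A "pro" comment: the two upvotes from "con" readers count,
# the three upvotes from fellow "pro" readers are ignored.
votes = [("con", +1), ("con", +1), ("pro", +1), ("pro", +1), ("pro", +1)]
print(cross_side_score("pro", votes))  # -> 2
```

Sorting comments by this score would then place at the top the arguments most persuasive to the other side, rather than the ones most applauded by supporters.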

The new voting system can simultaneously achieve several objectives:

  • eliminate incentives for preaching to the choir

  • give posters more objective feedback on the impact of their contributions, helping them improve their writing style

  • focus readers’ attention on the comments most likely to change their minds, rather than on inflammatory comments that provoke an irrational defensive reaction.

3. Testing

If you are interested in measuring and improving your persuasive skills and would like to help others to do the same, you are invited to take part in the following experiment:


Step I. Submit Pro or Con arguments on any of the following topics (up to 3 arguments in total):

     Should the government give all parents vouchers for private school tuition?

     Should developed countries increase the number of immigrants they receive?

     Should there be a government mandated minimum wage?


Step II. For each argument you have submitted, rate 15 arguments submitted by others.


Step III.  Participants will be emailed the results of the experiment including:

  • the ratings their arguments receive from different reviewer groups (supporters, opponents and neutrals)

  • the list of the most persuasive Pro & Con arguments on each topic (i.e. the arguments that received the highest ratings from the opposing and neutral groups)

  • the rating distribution in each group


Step IV (optional). If interested, sign up for the next round.


The experiment will help us test the effectiveness of the new voting system and develop the best format for its application.





The Trolley Problem and Reversibility

7 casebash 30 September 2015 04:06AM

The most famous problem used when discussing consequentialism is the trolley problem. A trolley is hurtling towards five people on the track, but if you flick a switch it will change tracks and kill only one person instead. Utilitarians would say that you should flick the switch, as it is better for there to be a single death than five. Some deontologists might agree with this; however, many more would object and argue that you don’t have the right to make that decision. This problem has different variations, such as one where you push someone in front of the trolley instead of them being on the track, but we’ll consider this one because, if it is accepted, it moves you a large way towards utilitarianism.

Let’s suppose that someone flicks the switch, but then realises the other side was actually correct and that they shouldn’t have flicked it. Do they now have an obligation to flick the switch back? What is interesting is that if they had just walked into the room and the train was heading towards the one person, they would have had an obligation *not* to flick the switch, but, having flicked it, it seems that they have an obligation to flick it back the other way.

Where this gets more puzzling is when we imagine that Bob has observed Aaron flicking the switch. Arguably, if Aaron had no right to flick the switch, then Bob would have an obligation to flick it back (or, if not an obligation, surely this would count as a moral good?). It is hard to argue against this conclusion, assuming there is a strong moral obligation for Aaron not to flick the switch, along the lines of “Do not kill”. This logic seems consistent with how we act in other situations: if someone had tried to kill a person or steal something important from them, most people would reverse or prevent the action if they could.

But what if Aaron reveals that he was only flicking the switch because Cameron had flicked it first? Then Bob would be obligated to leave it alone, as Aaron would be doing what Bob was planning to do: preventing interference. We can also complicate it by imagining that a strong gust of wind was about to flick the switch, but Bob flicked it first. Is there now a duty to undo Bob's flick of the switch, or does the fact that the switch was going to flip anyway abrogate that duty? This obligation to trace back the history seems very strange indeed. I can’t see any pathway to a logical contradiction, but I can’t imagine that many people would defend this state of affairs.

But perhaps the key principle here is non-interference. When Aaron flicks the switch, he has interfered and so he arguably has the limited right to undo his interference. But when Bob decides to reverse this, perhaps this counts as interference also. So while Bob receives credit for preventing Aaron’s interference, this is outweighed by committing interference himself - acts are generally considered more important than omissions. This would lead to Bob being required to take no action, as there wouldn’t be any morally acceptable pathway with which to take action.

I’m not sure I find this line of thought convincing. If we don’t want anyone interfering with the situation, couldn’t we glue the switch in place before anyone (including Aaron) gets the chance, or even the notion, to interfere? It would seem rather strange to argue that we have to leave the door open to interference before we even know that anyone is planning it. Next suppose that we don’t have glue, but we can install a mechanism that will flick the switch back if anyone tries to flick it. In principle, this doesn’t seem any different from installing glue.

Next, suppose we don’t have a machine to flick it back, so instead we install Bob. It seems that installing Bob is just as moral as installing an actual mechanism. It would seem rather strange to argue that “installing” Bob is moral, but any action he takes is immoral. There might be cases where “installing” someone is moral, but certain actions they take are immoral. One example would be “installing” a policeman to enforce a law that is imperfect. We can expect the decision to hire the policeman to be moral if the law is generally good, but, in certain circumstances, flaws in the law might make enforcement immoral. Here, though, we are imagining that *any* action Bob takes is immoral interference. It therefore seems strange to suggest that installing him could somehow be moral, and so this line of thought seems to lead to a contradiction.

We consider one last situation: that we aren't allowed to interfere, and that setting up a mechanism to stop interference also counts as interference. Imagine that Obama has ordered a drone attack that is going to kill a (robot, just go with it) terrorist. He knows that the drone attack will cause collateral damage, but it will also prevent the terrorist from killing many more people on American soil. He wakes up the next morning, realises that he was wrong to violate deontological principles, and calls off the attack. Are there any deontologists who would argue that he doesn’t have the right to rescind his order? Rescinding the order does not seem to count as “further interference”; rather, it counts as “preventing his interference from occurring”. Flicking the switch back seems functionally identical to rescinding the order. The train hasn’t reached the intersection, so there isn’t any causal entanglement, and flicking the switch back is best characterised as preventing the interference from occurring. If we want to make the scenarios even more similar, we can imagine that flicking the switch doesn't force the train to go down one track or another, but instead orders the driver to take a particular track. It doesn't seem like changing this aspect of the problem should alter the morality at all.

This post has shown that deontological objections to the trolley problem tend to lead to non-obvious philosophical commitments that are not very well known. I didn't write this post so much to show that deontology is wrong as to start a conversation and help deontologists understand and refine their commitments.

I also wanted to include one paragraph I wrote in the comments: Let's assume that the train will arrive at the intersection in five minutes. If you pull the lever one way, then pull it back the other, you'll save someone from losing their job. There is no chance that the lever will get stuck or that you won't be able to complete the operation once you try. Clearly pulling the lever and then pulling it back is superior to not touching it. This seems to indicate that the sin isn't pulling the lever, but pulling it without the intent to pull it back. And if the sin is pulling it without the intent to pull it back, then it would seem very strange that gaining the intent to pull it back, and then pulling it back, would be a sin.

A few misconceptions surrounding Roko's basilisk

6 RobbBB 05 October 2015 09:23PM

There's a new LWW page on the Roko's basilisk thought experiment, discussing both Roko's original post and the fallout that came out of Eliezer Yudkowsky banning the topic on Less Wrong discussion threads. The wiki page, I hope, will reduce how much people have to rely on speculation or reconstruction to make sense of the arguments.

While I'm on this topic, I want to highlight points that I see omitted or misunderstood in some online discussions of Roko's basilisk. The first point that people writing about Roko's post often neglect is:


  • Roko's arguments were originally posted to Less Wrong, but they weren't generally accepted by other Less Wrong users.

Less Wrong is a community blog, and anyone who has a few karma points can post their own content here. Having your post show up on Less Wrong doesn't require that anyone else endorse it. Roko's basic points were promptly rejected by other commenters on Less Wrong, and as ideas not much seems to have come of them. People who bring up the basilisk on other sites don't seem to be super interested in the specific claims Roko made either; discussions tend to gravitate toward various older ideas that Roko cited (e.g., timeless decision theory (TDT) and coherent extrapolated volition (CEV)) or toward Eliezer's controversial moderation action.

In July 2014, David Auerbach wrote a Slate piece criticizing Less Wrong users and describing them as "freaked out by Roko's Basilisk." Auerbach wrote, "Believing in Roko’s Basilisk may simply be a 'referendum on autism'" — which I take to mean he thinks a significant number of Less Wrong users accept Roko’s reasoning, and they do so because they’re autistic (!). But the Auerbach piece glosses over the question of how many Less Wrong users (if any) in fact believe in Roko’s basilisk. Which seems somewhat relevant to his argument...?

The idea that Roko's thought experiment holds sway over some community or subculture seems to be part of a mythology that’s grown out of attempts to reconstruct the original chain of events; and a big part of the blame for that mythology's existence lies on Less Wrong's moderation policies. Because the discussion topic was banned for several years, Less Wrong users themselves had little opportunity to explain their views or address misconceptions. A stew of rumors and partly-understood forum logs then congealed into the attempts by people on RationalWiki, Slate, etc. to make sense of what had happened.

I gather that the main reason people thought Less Wrong users were worried about Roko's argument was that Eliezer deleted Roko's post and banned further discussion of the topic. Eliezer has since sketched out his thought process on Reddit:

When Roko posted about the Basilisk, I very foolishly yelled at him, called him an idiot, and then deleted the post. [...] Why I yelled at Roko: Because I was caught flatfooted in surprise, because I was indignant to the point of genuine emotional shock, at the concept that somebody who thought they'd invented a brilliant idea that would cause future AIs to torture people who had the thought, had promptly posted it to the public Internet. In the course of yelling at Roko to explain why this was a bad thing, I made the further error---keeping in mind that I had absolutely no idea that any of this would ever blow up the way it did, if I had I would obviously have kept my fingers quiescent---of not making it absolutely clear using lengthy disclaimers that my yelling did not mean that I believed Roko was right about CEV-based agents [= Eliezer’s early model of indirectly normative agents that reason with ideal aggregated preferences] torturing people who had heard about Roko's idea. [...] What I considered to be obvious common sense was that you did not spread potential information hazards because it would be a crappy thing to do to someone. The problem wasn't Roko's post itself, about CEV, being correct.

This, obviously, was a bad strategy on Eliezer's part. Looking at the options in hindsight: To the extent it seemed plausible that Roko's argument could be modified and repaired, Eliezer shouldn't have used Roko's post as a teaching moment and loudly chastised him on a public discussion thread. To the extent this didn't seem plausible (or ceased to seem plausible after a bit more analysis), continuing to ban the topic was a (demonstrably) ineffective way to communicate the general importance of handling real information hazards with care.


On that note, point number two:

  • Roko's argument wasn’t an attempt to get people to donate to Friendly AI (FAI) research. In fact, the opposite is true.

Roko's original argument was not 'the AI agent will torture you if you don't donate, therefore you should help build such an agent'; his argument was 'the AI agent will torture you if you don't donate, therefore we should avoid ever building such an agent.' As Gerard noted in the ensuing discussion thread, threats of torture "would motivate people to form a bloodthirsty pitchfork-wielding mob storming the gates of SIAI [= MIRI] rather than contribute more money." To which Roko replied: "Right, and I am on the side of the mob with pitchforks. I think it would be a good idea to change the current proposed FAI content from CEV to something that can't use negative incentives on x-risk reducers."

Roko saw his own argument as a strike against building the kind of software agent Eliezer had in mind. Other Less Wrong users, meanwhile, rejected Roko's argument both as a reason to oppose AI safety efforts and as a reason to support AI safety efforts.

Roko's argument was fairly dense, and it continued into the discussion thread. I’m guessing that this (in combination with the temptation to round off weird ideas to the nearest religious trope, plus misunderstanding #1 above) is why RationalWiki's version of Roko’s basilisk gets introduced as

a futurist version of Pascal’s wager; an argument used to try and suggest people should subscribe to particular singularitarian ideas, or even donate money to them, by weighing up the prospect of punishment versus reward.

If I'm correctly reconstructing the sequence of events: Sites like RationalWiki report in the passive voice that the basilisk is "an argument used" for this purpose, yet no examples ever get cited of someone actually using Roko’s argument in this way. Via citogenesis, the claim then gets incorporated into other sites' reporting.

(E.g., in Outer Places: "Roko is claiming that we should all be working to appease an omnipotent AI, even though we have no idea if it will ever exist, simply because the consequences of defying it would be so great." Or in Business Insider: "So, the moral of this story: You better help the robots make the world a better place, because if the robots find out you didn’t help make the world a better place, then they’re going to kill you for preventing them from making the world a better place.")

In terms of argument structure, the confusion is equating the conditional statement 'P implies Q' with the argument 'P; therefore Q.' Someone asserting the conditional isn’t necessarily arguing for Q; they may be arguing against P (based on the premise that Q is false), or they may be agnostic between those two possibilities. And misreporting about which argument was made (or who made it) is kind of a big deal in this case: 'Bob used a bad philosophy argument to try to extort money from people' is a much more serious charge than 'Bob owns a blog where someone once posted a bad philosophy argument.'



  • "Formally speaking, what is correct decision-making?" is an important open question in philosophy and computer science, and formalizing precommitment is an important part of that question.

Moving past Roko's argument itself, a number of discussions of this topic risk misrepresenting the debate's genre. Articles on Slate and RationalWiki strike an informal tone, and that tone can be useful for getting people thinking about interesting science/philosophy debates. On the other hand, if you're going to dismiss a question as unimportant or weird, it's important not to give the impression that working decision theorists are similarly dismissive.

What if your devastating take-down of string theory is intended for consumption by people who have never heard of 'string theory' before? Even if you're sure string theory is hogwash, then, you should be wary of giving the impression that the only people discussing string theory are the commenters on a recreational physics forum. Good reporting by non-professionals, whether or not they take an editorial stance on the topic, should make it obvious that there's academic disagreement about which approach to Newcomblike problems is the right one. The same holds for disagreement about topics like long-term AI risk or machine ethics.

If Roko's original post is of any pedagogical use, it's as an unsuccessful but imaginative stab at drawing out the diverging consequences of our current theories of rationality and goal-directed behavior. Good resources for these issues (both for discussion on Less Wrong and elsewhere) include:

The Roko's basilisk ban isn't in effect anymore, so you're welcome to direct people here (or to the Roko's basilisk wiki page, which also briefly introduces the relevant issues in decision theory) if they ask about it. Particularly low-quality discussions can still get deleted (or politely discouraged), though, at moderators' discretion. If anything here was unclear, you can ask more questions in the comments below.

Digital Immortality Map: How to collect enough information about yourself for future resurrection by AI

5 turchin 02 October 2015 10:21PM

If someone has died, it doesn’t mean that you should stop trying to return him to life. There is one clear thing you can do (after cryonics): collect as much information about the person as possible, store a sample of his DNA, and hope that future AI will return him to life based on this information.


Two meanings of “Digital immortality”

The term “Digital immortality” is often confused with the notion of mind uploading, as the end result is almost the same: a simulated brain in a computer.

But here, by the term “Digital immortality” I mean the reconstruction of a person by future AI, based on his digital footprint and other traces, after that person’s death.

Mind uploading in the future will happen while the original is still alive (or while the brain exists in a frozen state): the brain will either be connected to a computer by some kind of sophisticated interface, or scanned. It cannot be done currently.

On the other hand, reconstruction based on traces will be done by future AI. So we just need to leave enough traces, and that we can do now.

But we don’t know how many traces are enough, so basically we should try to produce and preserve as many traces as possible. However, not all traces are equal in their predictive value. Some are almost random, and others are so common that they do not provide any new information about the person.


Cheapest way to immortality

Creating traces is an affordable way of reaching immortality. It could even be done for another person after his death, if we start to collect all possible information about him. 

Basically I am surprised that people don’t do it all the time. It can be done in a simple form almost for free and in the background: just start a video recording app on your notebook and record everything into a shared folder connected to a free cloud service. (The Evocam program for Mac is excellent, and provides up to 100 GB free.)

But really good digital immortality requires a 2-3 month commitment to self-description, with regular yearly updates. It may also require an investment of up to several thousand dollars in durable disks, DNA testing and video recorders, plus the free time to do it.

I understand how to set up this process and could help anyone interested.



The idea of personal identity is outside the scope of this map; I have another map on this topic (now in draft). I assume that the problem of personal identity will be solved in the future. Perhaps we will prove that information alone is enough to solve it, or we will find that continuity of consciousness is also required, but be able to construct mechanisms that transfer this identity independently of the information.

Digital immortality requires only a very weak notion of identity, i.e. a model of behavior and thought processes is enough for identity. This model may have some differences from the original, which I call the “one night difference”: the typical difference between me-yesterday and me-today after one night's sleep. The meaningful part of this information amounts to somewhere between several megabytes and several gigabytes, but we may need to collect much more, as we can’t yet separate the meaningful part from the random.

DI may also be based on an even weaker notion of identity: that anyone who thinks that he is me, is me. Weaker notions of identity require less information to be preserved, and in the latter case it may be around 10K bytes (including name, indexical information and a description of basic traits).

But the question of how many traces are needed to create an almost exact model of a personality is still open. It also depends on the predictive power of future AI: the stronger the AI, the fewer traces are needed.

Digital immortality is Plan C in my Immortality Roadmap, where Plan A is life extension and Plan B is cryonics; it is not Plan A because it requires both solving the identity problem and the existence of a powerful future AI.



I created the first version of it in 1990, when I was 16, immediately after I finished school. It included association tables, drawings and lists of all the people known to me, as well as some art, memoirs, audio recordings, and an encyclopedia of the everyday objects around me.

There are several approaches to achieving digital immortality. The most popular one is passive: simply video recording everything you do.

My idea was that a person can actively describe himself from the inside. He can find and declare the most important facts about himself. He can run specific tests that reveal hidden levels of his mind and subconscious. He can write a diary and memoirs. That is why I called my digital immortality project “self-description”.


Structure of the map

This map consists of two parts: theoretical and practical. The theoretical part lists basic assumptions and several possible approaches to reconstructing an individual, in which he is treated as a black box. If real neuron activity becomes observable, the “box” will become transparent and real uploading will be possible.

There are several steps in the practical part:

- The first step includes all the methods of fixing information while the person of interest is alive.

- The second step is about preservation of the information.

- The third step is about what should be done to improve and promote the process.

- The final, fourth step is about the reconstruction of the individual, which will be performed by AI after his death. In fact this may happen soon, maybe within the next 20-50 years.

There are several unknowns in DI, including the identity problem, the size and type of information required to create an exact model of a person, and the power future AI would need to run the process. These and other problems are listed in the box in the right corner of the map.

The pdf of the map is here, and jpg is below.


Previous posts with maps:

Doomsday Argument Map

AGI Safety Solutions Map

A map: AI failures modes and levels

A Roadmap: How to Survive the End of the Universe

A map: Typology of human extinction risks

Roadmap: Plan of Action to Prevent Human Extinction Risks

Immortality Roadmap


The application of the secretary problem to real life dating

5 Elo 29 September 2015 10:28PM

The following problem is best when not described by me:

Although there are many variations, the basic problem can be stated as follows:


There is a single secretarial position to fill.

There are n applicants for the position, and the value of n is known.

The applicants, if seen altogether, can be ranked from best to worst unambiguously.

The applicants are interviewed sequentially in random order, with each order being equally likely.

Immediately after an interview, the interviewed applicant is either accepted or rejected, and the decision is irrevocable.

The decision to accept or reject an applicant can be based only on the relative ranks of the applicants interviewed so far.

The objective of the general solution is to have the highest probability of selecting the best applicant of the whole group. This is the same as maximizing the expected payoff, with payoff defined to be one for the best applicant and zero otherwise.




After reading that, you can probably see the application to real life. A series of assumptions follows; some are fair, some will not be representative of you. I will try to name them all as I go so that you can swap in better ones for yourself. Assume that you plan to have children, and that, like billions of humans before you, you will probably do so while married, in a monogamous relationship (the set of assumptions does not break down for poly relationships or relationship anarchy, but it gets more complicated). These assumptions help us populate the secretary problem with numbers relating to dating for the purpose of having children.


If you assume that a biological female's clock ends at 40 (in that it's hard, and not healthy for the baby, to try to have a kid past that age), that is effectively the end of the pure and simple biological purpose of relationships (environment, IVF and adoption aside for a moment, and yes, there are a few more years past that).


For the purpose of this exercise, as a guy, you can add a few years for the potential age gap you would tolerate (my parents are 7 years apart, but that seems like a big understanding and maturity gap; they don't even like the same music). I personally expect I could tolerate an age gap of 4-5 years.

If you make the assumption that you start your dating life around the ages of 16-18, that gives you about [40-18=22] 22-24 (+5 for me as a male) years of expected dating time.

If you estimate the number of kids you want to have, and count either:

3 years for each kid OR

2 years for each kid (+1 kid – AKA 2 years)

(Twins will throw this number off, but estimate that they take longer to recover from, or more time raising them to manageable age before you have time to have another kid)

My worked example is myself: as one of 3 children, with two siblings of my own, I plan to have 3 children, or 8-9 years of child-having time. If we subtract that from the number above, we end up with 11-16 (16-21 for me, being male) years of dating time.

Also, if you happen to know someone with a number of siblings (or children) and a family dynamic that you like, you should consider that number of children for yourself. Remember that as a grown-up you will probably travel through the world with your siblings beside you, which can be beneficial (or detrimental); I would use the known working model of yourself or the people around you to predict whether you will benefit or be at a disadvantage from having siblings. As they say, you can't pick your family, for better and worse. You can pick your friends, and if you want them to be as close as a default family, that connection goes both ways; it is possible to cultivate friends that are closer than some families. However you choose to live your life is up to you.

Assume that once you find the right person, getting married (organising a wedding from the day the engagement rings are on fingers) and falling pregnant (successfully starting a viable pregnancy) take at least a year – maybe two, depending on how long you want to stay in "we just got married and we aren't having kids just yet". That leaves 9-15 (15-20 male-adjusted) years of dating.

Within my 9-15 years, I estimate a good relationship – long enough to work out whether I want to marry someone – takes between 6 months and 2 years. (As the guy who will probably be proposing and putting an engagement ring on someone's finger, I get a larger say in how long this takes than my significant other does.) That makes for a total of about 4 serious relationships on the low-and-long end and 30 on the upper end (7-40 male-adjusted).

Of course that's not how real life works. Some relationships will be longer and some will be shorter. I am fairly confident that all my relationships will fall around those numbers.

I have a lucky circumstance: I have already had a few serious relationships (substitute your own numbers here), so I can estimate how long I usually spend in one: (2 years + 6 years + 2 months + 2 months) / 4 ≈ 2.1 years. Which is to say that I probably have a maximum of around 7-15 relationships before I have to stop expecting to have kids, or start compromising on having 3 of them.
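The arithmetic above can be collected into a quick back-of-envelope sketch. All the numbers here are the post's own assumptions (clock ending at 40, dating from 18, and so on) – substitute your own:

```python
# Back-of-envelope dating-time budget, using the post's worked numbers.
clock_end = 40                 # assumed end of the biological clock
dating_start = 18              # assumed start of dating life
male_age_gap = 5               # extra years a male might add for a tolerated age gap

kids = 3
years_per_kid = (2, 3)         # low and high estimate of years per child
wedding_to_pregnancy = (1, 2)  # years from engagement to viable pregnancy

dating_years_low = clock_end - dating_start - kids * years_per_kid[1] - wedding_to_pregnancy[1]
dating_years_high = (clock_end - dating_start + male_age_gap
                     - kids * years_per_kid[0] - wedding_to_pregnancy[0])

# Average relationship length from the post's own history.
avg_relationship = (2 + 6 + 2/12 + 2/12) / 4  # ≈ 2.08 years

print(dating_years_low, dating_years_high)             # 11 20
print(round(dating_years_high / avg_relationship, 1))  # ≈ 9.6 relationships at most
```

The point of writing it out is that every input is visible and arguable – change one assumption and the whole budget shifts.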




A solution to the secretary equation

A known solution that gets you the best possible candidate most of the time is to reject the first 1/e of your candidates (roughly 37%), then choose the next candidate who is better than every one you have seen so far. For my numbers, that means going through 3-7 relationships and then choosing the next relationship that is better than all the ones before.


I don't quite like that. If the best candidate happens to fall in the first 1/e test phase, the rule never finds anyone better, marches to the end of the set, and hands you the last person by default – another opportunity-cost risk: what if they are rubbish? (Then you compromise on the age gap, the number of kids, or the partner's quality...) The chance of this is the chance the best candidate sits in the test phase, which is roughly 1/e ≈ 37% regardless of set size: for a set of 7 with a test phase of 3, it's 3/7 ≈ 43%; for a set of 15 with a test phase of 6, it's 6/15 = 40%.
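Both numbers are easy to check with a quick Monte Carlo sketch. This is a toy model – candidates as i.i.d. random scores, strictly rankable – not a claim about real dating:

```python
import math
import random

def secretary(n, trials=100_000, seed=0):
    """Simulate the classic 1/e stopping rule on n rankable candidates."""
    rng = random.Random(seed)
    k = round(n / math.e)  # size of the look-only test phase
    best_picked = defaulted_to_last = 0
    for _ in range(trials):
        scores = [rng.random() for _ in range(n)]
        threshold = max(scores[:k])
        # Take the first later candidate who beats the test phase,
        # otherwise settle for whoever comes last.
        pick = next((s for s in scores[k:] if s > threshold), scores[-1])
        best_picked += (pick == max(scores))
        defaulted_to_last += (max(scores) in scores[:k])  # best hidden in test phase
    return best_picked / trials, defaulted_to_last / trials

p_best, p_default = secretary(7)
print(p_best, p_default)  # ≈ 0.41 and ≈ 0.43 (i.e. 3/7)
```

So for a small set like 7, the rule finds the very best candidate about 41% of the time, and defaults to the last person about 43% of the time – both far from negligible.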


Opportunity cost

Each further relationship might cost you another 2 years, putting you further out of touch with the next generation (kids these days!). I tend to think about how old I will be when my kids are 15-20 – am I growing rapidly out of touch with the generation below? Two years is a big opportunity spend: another 2 years could see you successfully running a startup and achieving lifelong stability, at the cost of the opportunity to have another kid. I don't say this to crush you with fear of inaction, but it should factor in along with the other details of your situation.


A solution to the risk of having the best candidate in your test phase – or to the risk of lost opportunity – is to lower the bar: instead of choosing the next candidate who is better than all previous candidates, choose the next one who is better than 90% of the candidates so far. Incidentally, this probably happens in real life quite often, in a stroke of "you'll do"...
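Under the same kind of toy model (candidates as i.i.d. random scores), the relaxed rule can be sketched as follows – the quantile parameter is the knob, with 1.0 reproducing the strict rule:

```python
import math
import random

def relaxed_rule(n, quantile=1.0, look=3, trials=100_000, seed=0):
    """After a short look-only phase, accept the first candidate who beats
    the given quantile of everyone seen so far; settle for the last person
    if nobody does. quantile=1.0 is the strict 'better than all so far'
    rule; quantile=0.9 is the relaxed 'better than 90%' variant."""
    rng = random.Random(seed)
    got_best = 0
    for _ in range(trials):
        scores = [rng.random() for _ in range(n)]
        seen = list(scores[:look])
        pick = scores[-1]  # default: settle for the last person
        for s in scores[look:]:
            cutoff = sorted(seen)[math.ceil(quantile * len(seen)) - 1]
            if s > cutoff:
                pick = s
                break
            seen.append(s)
        got_best += (pick == max(scores))
    return got_best / trials

print(relaxed_rule(15, quantile=1.0))  # strict rule
print(relaxed_rule(15, quantile=0.9))  # relaxed rule: settles a bit earlier
```

The relaxed rule trades a slightly lower chance of landing the very best candidate for a lower chance of reaching the end empty-handed – which is exactly the "you'll do" compromise in code form.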


Where it breaks down


Real life is more complicated than that. I would like to think that subsequent relationships will not suffer the stupid mistakes of the last ones. There is also the opportunity cost of exploration: the more time you spend looking for different partners, the more you risk losing your early soul mate, or wasting time hunting for a better one when a "good enough" policy would do. No one likes to know they are "good enough", but we do race the clock in our lifetimes. Life is what happens while you are busy making plans.


As anyone with experience will know, we often test and rule out bad partners in a single conversation, without even getting as far as a date – or the relationship doesn't last more than a week. (I.e. the experience set grows through various means.)


People have a tendency to overrate the quality of a relationship while they are in it, versus the ones that already failed.


Did I do something wrong? 

“I got married early - did I do something wrong (or irrational)?”

No. Equations are not real life. It might have been nice to have the equation, but you obviously didn't need it. Also, this equation assumes monogamous relationships; in real life people have overlapping relationships, date several people at once, or are poly. These are all factors that change the simple assumptions of the equation.


Where does the equation stop working?

Real life is hard. It doesn't fall neatly into line; it's complicated, ugly, rough and smooth and clunky. But people still get by. Don't be afraid to break the rule.

Disclaimer: If this equation is the only thing you are using to evaluate a relationship - it’s not going to go very well for you.  I consider this and many other techniques as part of my toolbox for evaluating decisions.

Should I break up with my partner?

What? No! Following an equation is not a good way to live your life.

Does your partner make you miserable?  Then yes you should break up.


Do you feel like they are not ready to have kids yet, while you want to settle down? Tough call. Even if they were agents also running the equation, an equation is not real life. Go by your brain; go by your gut. Don't go by just one equation.

Expect another post soon about reasonable considerations that should be made when evaluating relationships.

The given problem assumes you can evaluate partners the way the secretary problem expects. Humans are not all that strategic and can't really do that. This is why the world will not perfectly follow this equation. Life is complicated; there are several metrics that make a good partner, and they don't always trade off against one another.



Meta: writing time - 3 hours over a week; 5+ conversations with people about the idea, bothering a handful of programmers and mathematicians for commentary on my thoughts, and generally a whole bunch of fun talking about it.  This post was started on the slack channel when someone asked a related question.


My table of contents for other posts in my series.


Let me know if this post was helpful or if it worked for you or why not.

[Link] Tetlock on the power of precise predictions to counter political polarization

4 Stefan_Schubert 04 October 2015 03:19PM

The prediction expert Philip Tetlock writes in The New York Times on the power of precise predictions to counter political polarization. Note the similarity to Robin Hanson's futarchy idea.

IS there a solution to this country’s polarized politics?

Consider the debate over the nuclear deal with Iran, which was one of the nastiest foreign policy fights in recent memory. There was apocalyptic rhetoric, multimillion-dollar lobbying on both sides and a near-party-line Senate vote. But in another respect, the dispute was hardly unique: Like all policy debates, it was, at its core, a contest between competing predictions.

Opponents of the deal predicted that the agreement would not prevent Iran from getting the bomb, would put Israel at greater risk and would further destabilize the region. The deal’s supporters forecast that it would stop (or at least delay) Iran from fielding a nuclear weapon, would increase security for the United States and Israel and would underscore American leadership.

The problem with such predictions is that it is difficult to square them with objective reality. Why? Because few of them are specific enough to be testable. Key terms are left vague and undefined. (What exactly does “underscore leadership” mean?) Hedge words like “might” or “could” are deployed freely. And forecasts frequently fail to include precise dates or time frames. Even the most emphatic declarations — like former Vice President Dick Cheney’s prediction that the deal “will lead to a nuclear-armed Iran” — can be too open-ended to disconfirm.


Non-falsifiable predictions thus undermine the quality of our discourse. They also impede our ability to improve policy, for if we can never judge whether a prediction is good or bad, we can never discern which ways of thinking about a problem are best.

The solution is straightforward: Replace vague forecasts with testable predictions. Will the International Atomic Energy Agency report in December that Iran has adequately resolved concerns about the potential military dimensions of its nuclear program? Will Iran export or dilute its quantities of low-enriched uranium in excess of 300 kilograms by the deal’s “implementation day” early next year? Within the next six months, will any disputes over I.A.E.A. access to Iranian sites be referred to the Joint Commission for resolution?

Such questions don’t precisely get at what we want to know — namely, will the deal make the United States and its allies safer? — but they are testable and relevant to the question of the Iranian threat. Most important, they introduce accountability into forecasting. And that, it turns out, can depolarize debate.

In recent years, Professor Tetlock and collaborators have observed this depolarizing effect when conducting forecasting “tournaments” designed to identify what separates good forecasters from the rest of us. In these tournaments, run at the behest of the Intelligence Advanced Research Projects Activity (which supports research relevant to intelligence agencies), thousands of forecasters competed to answer roughly 500 questions on various national security topics, from the movement of Syrian refugees to the stability of the eurozone.

The tournaments identified a small group of people, the top 2 percent, who generated forecasts that, when averaged, beat the average of the crowd by well over 50 percent in each of the tournament’s four years. How did they do it? Like the rest of us, these “superforecasters” have political views, often strong ones. But they learned to seriously consider the possibility that they might be wrong.

What made such learning possible was the presence of accountability in the tournament: Forecasters were able to see their competitors’ predictions, and that transparency reduced overconfidence and the instinct to make bold, ideologically driven predictions. If you can’t hide behind weasel words like “could” or “might,” you start constructing your predictions carefully. This makes sense: Modest forecasts are more likely to be correct than bold ones — and no one wants to look stupid.

This suggests a way to improve real-world discussion. Suppose, during the next ideologically charged policy debate, that we held a public forecasting tournament in which representatives from both sides had to make concrete predictions. (We are currently sponsoring such a tournament on the Iran deal.) Based on what we have seen in previous tournaments, this exercise would decrease the distance between the two camps. And because it would be possible to determine a “winner,” it would help us learn whether the conservative or liberal assessment of the issue was more accurate.


Either way, we would begin to emerge from our dark age of political polarization.

October 2015 Media Thread

4 ArisKatsaris 01 October 2015 10:17PM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.


  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

Polling Thread - Tutorial

4 Gunnar_Zarncke 01 October 2015 09:47PM

After some hiatus another installment of the Polling Thread.

This is your chance to ask your multiple choice question you always wanted to throw in. Get qualified numeric feedback to your comments. Post fun polls.

Additionally this is your chance to learn to write polls. This installment is devoted to try out polls for the cautious and curious.

These are the rules:

  1. Each poll goes into its own top level comment and may be commented there.
  2. You must vote in all polls that were posted earlier than your own. This ensures participation in all polls and also limits the total number of polls. You may of course vote without posting a poll.
  3. Your poll should include a 'don't know' option (to avoid conflict with rule 2). I don't know whether we need to add a troll-catch option here, but we will see.

If you don't know how to make a poll in a comment look at the Poll Markup Help.

This is a somewhat regular thread. If it is successful I may post again. Or you may. In that case, do the following:

  • Use "Polling Thread" in the title.
  • Copy the rules.
  • Add the tag "poll".
  • Link to this Thread or a previous Thread.
  • Create a top-level comment saying 'Discussion of this thread goes here; all other top-level comments should be polls or similar'
  • Add a second top-level comment with an initial poll to start participation.

How could one (and should one) convert someone from pseudoscience?

3 Vilx- 05 October 2015 11:53AM

I've known for a long time that some people who are very close to me are somewhat inclined to believe in the pseudoscience world, but it always seemed pretty benign. In their everyday lives they're pretty normal people and don't do any crazy things, so this was a topic I mostly avoided and left it at that. After all, they seemed to find psychological value in it: a sense of control over their own lives, a sense of purpose, etc.

Recently I found out however that at least one of them seriously believes Bruce Lipton, who in essence preaches that happy thoughts cure cancer. Now I'm starting to get worried...

Thus I'm wondering - what can I do about it? This is in essence a religious question. They believe this stuff with just anecdotal proof. How do I disprove it without sounding like "Your religion is wrong, convert to my religion, it's right"? Pseudoscientists are pretty good at weaving a web of lies that sound quite logical and true.

The one thing I've come up with is to somehow introduce them to classical logical fallacies. That at least doesn't directly conflict with their beliefs. But beyond that I have no idea.

And perhaps more important is the question - should I do anything about it? The pseudoscientific world is a rosy one. You're in control of your life and your body, you control random events, and most importantly - if you do everything right, it'll all be OK. Even if I succeed in crushing that illusion, I have nothing to put in its place. I'm worried that revealing just how truly bleak the reality is might devastate them. They seem to be drawing a lot of their happiness from these pseudoscientific beliefs, either directly or indirectly.

And anyway, more likely that I won't succeed but just ruin my (healthy) relationship with them. Maybe it's best just not to interfere at all? Even if they end up hurting themselves, well... it was their choice. Of course, that also means that I'll be standing idly by and allowing bullshit to propagate, which is kinda not a very good thing. However right now they are not very pushy about their beliefs, and only talk about them if the topic comes up naturally, so I guess it's not that bad.

Any thoughts?

Weekly LW Meetups

3 FrankAdamek 02 October 2015 04:22PM

This summary was posted to LW Main on September 25th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, Denver, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.

continue reading »

[Link] Differential Technology Development - Some Early Thinking

3 MattG 01 October 2015 02:08AM

This article gives a simple model for thinking about the positive effects of a friendly AI vs. the negative effects of an unfriendly AI, and lets you plug in certain assumptions to see if speeding up AI progress is worthwhile. Thought some of you here might be interested.

Open thread, Oct. 5 - Oct. 11, 2015

2 MrMind 05 October 2015 06:50AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Donna Capsella and the four applicants, pt.1

0 Romashka 02 October 2015 02:15PM

Once upon a time, in a dark, cruel world – maybe a world darker and crueller than it is – there lived a woman who wanted a piece of the action. Her name was Capsella Medik, but we remember her as Donna Capsella. This is an anecdote from her youth, told by a man who lived to tell it.

You've got to understand, Donna started small. Real small. No money, no allies, no kin, and her wiles were – as feminine as they are. Still, she was ambitious, even then, and she had to look the part.

Girl had a way with people. Here's how it went.

One night, she rents a room – one table, five chairs – and two armed bodies, and sets up a date with four men at once – Mr. Burr, Mr. Sapp, Mr. Ast and Mr. Oriss, who've never seen her before. All are single, thirty-ish white collars. One look at the guns, and they're no trouble at all.

On the table, there's a heap: a coloured picture, a box of beads, another box (empty), four stacks of paper, four pens, a calculator and a sealed envelope.

'So,' says Donna. 'I need a manager. A clever man who'd keep my bank happy while I am...abroad. I offer you to play a game – just one game – and the winner is going to sign these papers. You leave hired, or not at all.'

The game was based on Mendel's Laws – can you imagine? The police never stood a chance against her... She had it printed out – a kind of cheat-sheet. It's like, if you have some biological feature, it's either what your genes say, or you helped Nature along the way; and the exact wording can be different, so you have blue eyes or brown eyes. The wording is what they call an allele. Some alleles (dominant) shout louder than others (recessive), so you'll have at most two copies of each gene (hopefully), but only one will ever be heard on the outside.

(It's not quite that simple, but we didn't protest. Guns, you know.)

So there was a picture of a plant whose leaves came in four shapes (made by two genes with two alleles each):


From left to right: simplex, rhomboidea, heteris and tenuis. Simplex had only recessive alleles, aabb. Rhomboidea and tenuis each had only one pair of recessive alleles – aaB? and A?bb. But heteris, that one was a puzzler: A?B?.

'Okay,' Donna waves her hand over the heap on the table. 'Here are the rules. You will see two parent plants, and then you will see their offspring – one at a time.' She shows us the box with the beads. 'Forty-eight kids total.' She begins putting some of the beads into the empty box, but we don't see which ones. 'The colours are like in the picture. You have to guess as much about the parents and the kids as you can as I go along. All betting stops when the last kid pops out. Guess wrong, even partially wrong, you lose a point, guess right, earn one. Screw around, you're out of the game. The one with the most points wins.'

'Uh,' mumbles Oriss. 'Can we, maybe, say we're not totally sure – ?..'

She smiles, and oh, those teeth. 'Yeah. Use your Bayes.'

And just like that, Oriss reaches to his stack of paper, ready to slog through all the calculations. (Oriss likes to go ahead and gamble based on some math, even if it's not rock solid yet.)

'Er,' tries Sapp. 'Do we have to share our guesses?'

'No, the others will only know that you earned or lost a point.'

And Sapp picks up his pen, but with a little frown. (He doesn't share much, does Sapp.)

'Um,' Ast breaks in. 'In a single round, do we guess simultaneously, or in some order?'

'Simultaneously. You write it down and give it to me.'

And Ast slumps down in his seat, sweating, and eyes the calculator. (Ast prefers to go where others lead, though he can change his mind lightning-fast.)

'Well,' Burr shrugs. 'I'll just follow rough heuristics, and we'll see how it goes.'

'Such as?' asks Donna, cocking her head to the side.

'As soon as there's a simplex kid, it all comes down to pure arithmetic, since we'll know both parents have at least one recessive allele for each of the genes. If both parents are heteris – and they will be, I see it in your eyes! – then the probability of at least one of them having at least one recessive allele is higher than the probability of neither having any. I can delay making guesses for a time and just learn what score the others get for theirs, since they're pretty easy to reverse-engineer – '

'What!' say Ast, Sapp and Oriss together.

'You won't get points fast enough,' Donna points out. 'You will lose.'

'I might lose. And you will hire me anyway. You need a clever man to keep your bank happy.'

Donna purses her lips.

'You haven't told anything of value, anything the others didn't know.'

'But of course,' Burr says humbly, and even the armed bodies scowl.

'You're only clever when you have someone to mooch off. I won't hire you alone.'


'Mind, I won't pick you if you lose too badly.'

Burr leers at her, and she swears under her breath.

'Enough,' says Donna and puts down two red beads – the parents – on the table.

We take our pens. She reaches out into the box of offspring.

The first bead is red.

And the second one is red.

And the third one is red.

...I tell you, it was the longest evening in my life.


So, what are your Fermi estimates for the numbers of points Mr. Burr, Mr. Sapp, Mr. Ast and Mr. Oriss each earned? And who was selected as a manager, or co-managers? And how many people left the room?
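One way to sanity-check the underlying ratios before estimating: if both heteris parents are AaBb, the standard dihybrid cross predicts phenotype frequencies of 9:3:3:1, so in expectation the 48 kids split as 27 heteris, 9 tenuis, 9 rhomboidea and 3 simplex. A quick enumeration sketch, using the story's phenotype names:

```python
from collections import Counter
from itertools import product

def phenotype(a_alleles, b_alleles):
    """Dominant (uppercase) alleles are 'heard on the outside'."""
    a = 'A' in a_alleles
    b = 'B' in b_alleles
    if a and b: return 'heteris'      # A?B?
    if a:       return 'tenuis'       # A?bb
    if b:       return 'rhomboidea'   # aaB?
    return 'simplex'                  # aabb

# Each AaBb (heteris) parent passes one allele per gene: four possible gametes.
gametes = [g1 + g2 for g1, g2 in product('Aa', 'Bb')]

# Pair every maternal gamete with every paternal one and tally phenotypes.
counts = Counter(
    phenotype(m[0] + f[0], m[1] + f[1])
    for m, f in product(gametes, repeat=2)
)
print(counts)  # 9 heteris : 3 tenuis : 3 rhomboidea : 1 simplex (out of 16)
```

Whether Burr, Sapp, Ast or Oriss actually converge on those ratios from 48 noisy beads is, of course, the whole game.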

(I apologise - the follow-up won't be for a while.)