Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Open Thread, May 16-31, 2012

4 Post author: OpenThreadGuy 16 May 2012 07:36AM

If it's worth saying, but not worth its own post, even in Discussion, it goes here.

Comments (121)

Comment author: Grognor 17 May 2012 03:32:35PM *  14 points [-]

Walt Whitmanisms
The original:

Do I contradict myself? Very well then I contradict myself. I am large, I contain multitudes.

Sark Julian:

Do I make tradeoffs? Very well then I make tradeoffs. I am poor, I need to make compromises.


Do I repeat myself? Very well then, I repeat myself. You are large, you contain multitudes.


Am I signaling? Very well then, I am signaling. I am human; I am part of a tribe.

Steven Kaas:

Do I contradict myself? Very well, then I contradict myself. I am large, I can beat up anyone who calls me on it. #whitmanthebarbarian

Steven Kaas:

Do I have no opinion? Very well, then I have no opinion. I am small, I do not contain a team of pundits.

Comment author: Viliam_Bur 16 May 2012 11:43:12AM 12 points [-]

If you had to pick exactly 20 articles from LessWrong to provide the greatest added value for a reader, which 20 articles would you select?

In other words, I am asking you to pick "Sequences: Micro Edition" for new readers, or old readers who feel intimidated by the size and structure of Sequences. No sequences and subsequences, just 20 selected articles that should be read in the given order.

It is important to consider that some information is spread across many articles, and some articles rely on information explained in previous ones. Your selection should make sense for people who have read nothing else on LW and cannot click hyperlinks for explanations (as if they were reading the articles on paper, without comments). Do the introductory articles provide enough value even if you don't include their whole sequence in the selected 20? Is it better to pick examples from more topics, or to focus on one?

Yes, I am hoping that reading those 20 articles would encourage the reader to read more, perhaps even the whole Sequences. But the 20 articles should provide enough value when taken alone; they should be a "food", not just an "appetizer".

It is OK to also pick LW articles that are not part of the traditional Sequences. It is OK to suggest fewer than 20 articles. (Suggesting more than 20 is not OK, because the goal is to select a small number of articles that provide value without requiring the reader to read anything more.)

Comment author: Viliam_Bur 17 May 2012 03:03:53PM 4 points [-]

Now let's try it differently. Even if you feel that 20 articles is too small a subset to capture the richness of this site, let's push it even further. Imagine that you can only list 10 articles, or 7 articles, or 5, or 3, or just the single best article on LessWrong. It will be painful, but please do your best.

Why? Well, unless one of us puts their selection of 20 articles on the wiki while ignoring the others, the resulting selection will be a mix of articles you would select and articles you wouldn't. The resulting 20 articles will contain only 10 or fewer articles from your personal "top 20" selection. So let's make those the best 10 articles.

However, I ask you to avoid strategies like this: "I think articles A and B are good. A is better than B, so if I had to choose only one article, I would choose A. But article A is widely popular, and most other people will probably choose it too; therefore I will pick B, which maximizes the chance that both A and B end up in the final selection." Please avoid this. Just pretend that the remaining articles will be chosen randomly (even if other people have already posted their choices), so you should really choose what you prefer most. Please cooperate in this Prisoner's Dilemma.

Also, please explain your reasons for selecting those articles. Maybe you see an aspect others are missing. Maybe others can suggest another article which fulfills your goal better. (In other words, if you explain yourself, others can extrapolate your volition.)

Comment author: Viliam_Bur 17 May 2012 04:00:46PM *  2 points [-]

My choice, the most important three articles:

  • Why truth? And... -- it contains motivation for doing what we do, and explains the "Spock Rationality" misunderstanding
  • An Intuitive Explanation of Bayes' Theorem -- a biology/medicine example focusing on women, and an interactive math textbook (great to balance the LW bias: male, sci-fi, computers, impractical philosophy, nonstandard science)
  • Why Our Kind Can't Cooperate -- a frequent failure mode of unknowingly trying to reverse stupidity in real life; important for those who hope to have a rational community

then these:

  • How to Be Happy -- a lot of low-hanging fruit for a new reader, applying science to everyday life; bonus points for being written by someone else
  • Something to Protect -- bringing the motivation to the near mode; the moral aspect of becoming rational
  • Well-Kept Gardens Die By Pacifism -- a frequent failure mode of online communities; an explanation of the LW moderation system

and then these:

Note: I think that each of these articles can be read and understood separately, which in my opinion is good for total newbies. People expect a short inferential distance, and you must first gain their attention before you can lead them further. If they enjoy the MicroSequences, they will be more likely to continue with the full Sequences. I also think these articles are not controversial or weird, so they will give a good impression to an outsider. The selection includes math, instrumental rationality, and the social aspects of rationality.

Funny thing: it was rather painful to reduce my suggested list to only 10 articles, but now I feel happy and satisfied with the result. Please make your own list independently of this one. (Imagine that you have to select 10 or fewer articles for your friend.)

Comment author: Viliam_Bur 16 May 2012 01:33:27PM *  3 points [-]

OK, my first shot, probably just to encourage people to do better than this:

EDIT: Oops, it was more than 20, I was in a hurry. The more important (IMHO) articles are now marked by a bold font, with explanation added.

Comment author: RobertLumley 16 May 2012 05:16:25PM 4 points [-]

Seems to me like you need Mysterious Answers to Mysterious Questions in there. That's far and away one of my favorites.

Comment author: Grognor 16 May 2012 02:45:45PM *  3 points [-]

(You can single-space your posts by putting two spaces at the end of each line. Do this, for it will save scrolltime.)

I'm going to avoid repeating ones on your list, entirely because I think repetition is bad. Here I go:

The trouble with picking stand-alone posts is that Eliezer's sequences of posts are so much better.

Comment author: dbaupp 17 May 2012 07:29:47AM 2 points [-]

What are the ones you would include if you were including repeats? (Viliam_Bur is asking for an absolute top 20, not several independent lists of good posts.)

Comment author: TimS 18 May 2012 01:46:30PM *  1 point [-]

Who exactly is "The Simple Truth" aimed at? As far as I can tell, the message is that worrying about cashing out the meaning of truth is not worth the effort in ordinary circumstances. That's true, but it is a fully general counter-argument to studying anything: worrying about the meaning of "quantum configuration" has no practical payoff either, even though building things like computers relies on studying those sorts of questions. Likewise, the meaning of truth is genuinely hard if you actually examine it.

Put differently, religious people don't disagree with us about what truth means; they disagree about what is actually true. And they are wrong, for the reasons detailed in "Making Beliefs Pay Rent." In short, no real person is analogous to Mark, so no real person's philosophical positions are contradicted by the story.

To repeat, the story doesn't solve any real questions about truth; it simply says they are practically [Edit] unimportant (which is true, but makes the story itself pretty unhelpful).

Comment author: Viliam_Bur 18 May 2012 03:36:21PM *  0 points [-]

For me the message of "The Simple Truth" was that the intelligence should not be used to defeat itself. Being right, even if you can't define it to a philosopher's satisfaction, is better than being wrong, even if you can find smart words to support it. The truth business is not about words (that's the signalling business): when you are right, nature rewards you, and when you are wrong, nature punishes you. (Although among humans, speaking the truth can cause you a lot of trouble.) At the same time the story explains the origin of our ability to understand truth -- we have this ability because having it was an evolutionary advantage.

Or maybe I just like that the annoying wise-ass guy dies in the end.

This is not about religious people, who disagree about what is actually true, as you said. This is about people who try to do "philosophy" by inventing ever more complex ways to sound stupid... errr... profound, and who perhaps even sometimes succeed in convincing themselves. People who say things like "there is no truth", because for anything you say they can generate a long sequence of words that you just don't have time to analyze and debunk (and even if you did, they would use a fraction of that time to generate a new sequence of words). If you haven't met such people, consider yourself lucky; I know people who can role-play Mark and thus ruin any chance of a rational discussion, and to a non-x-rational listener it often seems like their arguments are important and deep and should be addressed seriously.

Anyway, "The Simple Truth" is kinda long, which I enjoyed but other people may hate; so there is probably no harm in removing it, as long as "Making Beliefs Pay Rent" and "Something to Protect" stay on the list.

Comment author: TimS 18 May 2012 03:54:17PM 2 points [-]

the intelligence should not be used to defeat itself

I agree with this feeling, but "Do the impossible" or one of the nearby posts raises this point more explicitly and more effectively.

The problem with "Simple Truth" is that - beyond the message I highlighted - the text is too open ended. Mirror-like, the story contains whatever philosophical positions the reader wishes to see in it.

I know people who can role-play Mark

There are two possible kinds of people who can do this. (1) People with useful but complicated theories that you happen not to understand, and (2) stupid people - who might be poorly parroting a useful theory. Please don't let the (negative) halo effect of the second type infect your view of the first type of people.

Generally, your objection pattern matches with the argument that law is too complicated. Respectfully, I disagree.

Comment author: TheOtherDave 18 May 2012 03:19:36PM 0 points [-]

I think you mean "practically unimportant" in your last sentence.

I've always understood the purpose of that article to be to pre-emptively foreclose objections of the form "but being rational is irrelevant, because you can't really know what's true" by declaring them rhetorically out-of-bounds.

Comment author: TimS 18 May 2012 03:46:29PM 0 points [-]

Indeed a typo, thanks.

I've always taken the objection you mentioned as invoking the problem of the reliability of the senses (i.e. Cartesian skepticism), not the meaningfulness of truth. In the story, Mark is no Cartesian skeptic (although it's hard to tell, because Mark is a terribly confused person).

I think skeptical objections to Bayesian reasoning are like questions about the origin of life directed at evolutionary theory. The criticisms aren't exactly wrong - it's just that the theory targeted by the criticism is not trying to provide an answer on that issue.

Comment author: JoachimSchipper 17 May 2012 09:04:02AM 0 points [-]

I'd add something like Keep your identity small, Beware Identity.

Comment author: [deleted] 16 May 2012 02:12:45PM *  1 point [-]
Comment author: moridinamael 16 May 2012 08:53:05PM 10 points [-]

I don't know if the intention here is to debate other people's choices, but: my wife started The Simple Truth because it was the first sequence post on the list, and she quickly became frustrated and annoyed that it didn't seem to lead anywhere and seemed to be composed of "in-jokes." She didn't try to read further into the Sequences because of the bad impression she got from this article, which is an unusually weird, long, rambling, quirky one.

I actually like The Simple Truth but I don't feel that it makes a good introduction to the Sequences. But hey, this is just one data point.

Comment author: arundelo 18 May 2012 05:06:26PM *  4 points [-]

I predict that when your wife read "The Simple Truth" she was not acquainted with (or was not thinking about) the various theories of truth that philosophers have come up with. I like it a lot, but when I first read it I was able to see it as a defense of a particular theory of truth and a critique of some other ones.

(In particular, it's a defense of the correspondence theory, though see this thread.)

Edit: In other words, I think "The Simple Truth" appeals mainly to people who have read descriptions of the other theories of truth and said to themselves, "People actually believe that?!"

Comment author: moridinamael 18 May 2012 06:59:23PM 3 points [-]

You're correct. What I love about the Sequences in general is that they are a colloquial, patient introduction to lots of new concepts. In theory, even somebody with no background in decision theory or quantum mechanics can actually learn these concepts from the Sequences. The Simple Truth is significantly different in tone and style from the majority of Sequence posts, and the concepts that post satirizes are not really introduced before the comedy begins.

If you go to http://wiki.lesswrong.com/wiki/Sequences and choose the first option (1 Core Sequences), then choose the first listed subsequence (Map and Territory), the very first post is The Simple Truth. The second choice is What Do We Mean by Rationality? which really, really seems like it should be the first thing a newcomer reads.

Comment author: beoShaffer 16 May 2012 09:34:08PM 1 point [-]

I actually like The Simple Truth but I don't feel that it makes a good introduction to the Sequences.

Same here, though I think it does depend on the reader's background. People who strongly disbelieve in the concept of objective truth might find it helpful to have that taken care of before starting the Sequences proper, but even then I'm not sure "The Simple Truth" is the best way.

Comment author: [deleted] 16 May 2012 08:58:48PM *  1 point [-]

You might be right--I'll have to re-read it. I put this list together based on my memory of what these posts are like, and given how volatile memories are, I may be mistaken about their quality.

Edit: You're right. I'll change my list accordingly.

Comment author: David_Gerard 16 May 2012 02:17:16PM 0 points [-]

Which twenty have the highest number of votes?

Comment author: [deleted] 16 May 2012 02:23:28PM 8 points [-]

These, but that's probably not the best way to go about making a list. Many of the top posts require prerequisites, and there are some equally good posts that are not as heavily upvoted because they were published on OB or in LW's infancy.

Comment author: JoachimSchipper 17 May 2012 06:33:05AM 0 points [-]

What is your intended audience, and what is the intended effect of reading these sequences? "Politics is the Mind-Killer" and "Well-Kept gardens die by pacifism" seem particularly relevant to online communities, for instance.

Comment author: Viliam_Bur 17 May 2012 08:32:30AM *  0 points [-]

It was intended for new people on LW, who should be introduced to our "community values" (even without reading the whole Sequences), and also for smart people outside LW who are curious about what LW is and might decide to join later.

In both cases, the goal is to make clear what LW (and x-rationality) is, and what it is not, in a short amount of text. Perhaps writing a new text would be better, but making a selection from existing texts should be quicker.

"Politics is the Mind-Killer" and "Well-Kept gardens die by pacifism" seem particularly relevant to online communities, for instance.

Yes, but I think they also apply well offline. People can discuss politics in person, too. The lesson of well-kept gardens is indirect: some people are a net loss, and if you don't filter them out of your social network, your quality of life will go down.

Now I added some explanations to my list, so the message is like this:

  • there is such a thing as truth/territory, and it has consequences in real life
  • to know = to make good predictions
  • it's not about speaking mysteriously or using the right keywords, but about understanding the details
  • protect your values, don't use your intelligence to defeat yourself
  • don't let your emotions and biases make you stupid, but also don't try to reverse stupidity
  • a rational community is a great idea, but it requires specific skills
  • here is how to use rationality to improve your everyday life

Comment author: djcb 16 May 2012 06:38:04PM -1 points [-]

Nice idea - but maybe we should compress things further? I've read most of the sequences, but think/hope they could be condensed to about 10-20 pages with the core messages, in such a way that would be more accessible outside these realms.

Comment author: vi21maobk9vp 17 May 2012 04:58:50AM 1 point [-]

I guess the idea is to find, among articles that already exist, 20 that provide both the ideas and the arguments. Once there is some solution, it becomes easier to write those 20 pages than it would be starting from scratch. Obviously, the 20 paper pages you mention have yet to be written, while the "20 best articles for isolated reading" may already exist.

Comment author: [deleted] 17 May 2012 11:46:58AM *  9 points [-]

Stuff by Yvain

On the applications of bad translation.

Comment author: NancyLebovitz 21 May 2012 05:05:04PM 1 point [-]

I think "words don't have meanings, people have meanings" is overdoing the concept, but not by much.

Comment author: thomblake 17 May 2012 07:50:48PM 0 points [-]

Nice find!

Comment author: Grognor 20 May 2012 01:26:59PM 6 points [-]
Comment author: tut 22 May 2012 12:06:29PM 0 points [-]

They also missed the theory that is shaped like a star, but without the extraneous nonsense in the middle. Which is exactly as simple as their preferred theory.

Comment author: othercriteria 22 May 2012 01:29:28PM 1 point [-]

So I'm entering an argument over fictional evidence, which is already a losing move, but who cares.

Taking the convex hull of the observations is obviously the right thing to do!

If you asked a mathematician for the simplest function from a point set in the plane to a point set in the plane, they'd flip a coin and say either the constant function that's always the empty set or the constant function that's always the plane. But that's silly, because those functions don't use your evidence.

(Other constant functions are out, because there's no way to pick between them.)

So if you asked a mathematician for the next simplest function from a point set in the plane to a point set in the plane, they'd say the identity function. That's not silly, but if you want a theory that's not just a recapitulation of your evidence, it won't help you.

(Projections or other ways of taking subsets are out because there's no natural way to pick individual points out.)

(Things like the mean are out because of measure-theoretic difficulties.)

So if you asked a mathematician for the next simplest function from a point set in the plane to a point set in the plane, they'd say the convex hull. It has all sorts of nice properties (idempotent, nondecreasing, etc.) and just sort of feels like the right thing to do with a point set.

On the other hand, sticking line segments between the points (and in a hard to specify order) is a few more "next"s down the list and only makes sense for finite point sets with pretty special geometry anyways.
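For readers who want to see the construction rather than just the argument for it, here is a minimal sketch of one standard convex-hull algorithm (Andrew's monotone chain; the example points are mine, not from the comment):

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); positive means a counter-clockwise turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Convex hull of a finite planar point set, as a counter-clockwise list."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def half_hull(seq):
        hull = []
        for p in seq:
            # Discard points that would make the chain turn clockwise (or go straight).
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull

    lower = half_hull(pts)
    upper = half_hull(reversed(pts))
    # Drop each chain's last point; it repeats the other chain's first point.
    return lower[:-1] + upper[:-1]

# An interior point is discarded, and hull(hull(S)) == hull(S) -- the
# idempotence mentioned above.
square_plus_inner = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]
print(convex_hull(square_plus_inner))  # [(0, 0), (2, 0), (2, 2), (0, 2)]
```

The "nice properties" claimed above fall out directly: rerunning `convex_hull` on its own output returns the same set, and adding points can only grow the hull.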

Comment author: Kaj_Sotala 16 May 2012 12:39:27PM 14 points [-]

Welcome to Life: the singularity, ruined by lawyers.

(Humor, three-minute YouTube clip.)

Comment author: Tuxedage 18 May 2012 01:32:52AM *  5 points [-]

The dark arts are in action here. Beware, lest you Generalize from fiction.

Comment author: TimS 18 May 2012 01:36:13PM 3 points [-]

It's an interesting vision, but lawyers have nothing to do with the problem. The problem is the commercialization of something that our moral intuitions say should not be commercialized.

Being upset at lawyers about this state of affairs is like being angry at a concrete truck for building the foundation of a building in an offensive location.

Comment author: RobertLumley 17 May 2012 10:22:37PM *  2 points [-]

The majority of the stuff on that guy's website is pretty interesting. He's got several TED talks, one of which is essentially on prediction markets.

Comment author: Oscar_Cunningham 17 May 2012 11:51:37AM 5 points [-]
Comment author: Desrtopa 22 May 2012 01:59:09PM 5 points [-]

The most heavily downvoted post in Less Wrong history is actually not on that list. Curi's "The Conjunction Fallacy Does Not Exist" was removed by Eliezer on the grounds that it was massively downvoted and too stupid to discuss productively.

Comment author: dbaupp 23 May 2012 09:11:38AM 1 point [-]

(If anyone wishes to see this article, it can be read on Curi's user page, but one can't view it or its comments directly.)

Comment author: NancyLebovitz 21 May 2012 05:06:18PM 1 point [-]

The link doesn't work.

Comment author: ghf 21 May 2012 07:19:38PM 3 points [-]

It works for me, but only after changing my preferences to view articles with lower scores (my cutoff had been set at -2).

Comment author: Oscar_Cunningham 21 May 2012 06:16:30PM *  0 points [-]

It works for me. ???

Comment author: JoshuaZ 21 May 2012 06:43:17PM *  0 points [-]

I can't get it to work either. Maybe just c&p the text?

Comment author: [deleted] 17 May 2012 07:39:00AM *  11 points [-]

Genes are overrated, genetics is underrated

by Razib Khan

... I agree on one thing in particular: an emphasis on concrete and specific genes for traits is a motif in science journalism that can be very frustrating, and often misleading. Nevertheless, that's not the only story. I believe our current culture greatly underestimates the power of genetics in shaping broader social patterns.

How can these be reconciled? Do not genes and genetics go together? The resolution is a simple one: when you speak of 1,000 genes, you speak of no genes. You can’t list 1,000 genes in prose, even if you know them. But using standard quantitative and behavior genetic means one can apportion variation in the population of a trait to variation in genes. 1,000 genes added together can be of great effect. The newest findings in genomics are reinforcing assertions of non-trivial heritability of many complex traits, though rendering problematic attributing that heritability to a specific set of genes.
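The "1,000 genes added together can be of great effect" point is easy to check numerically. A toy sketch under standard additive quantitative-genetics assumptions (the locus count, allele frequency, and effect size below are hypothetical, chosen only for illustration):

```python
import random

random.seed(0)
N_LOCI = 1000   # hypothetical: many loci, each nearly invisible on its own
EFFECT = 0.05   # hypothetical per-allele effect on the trait
P = 0.5         # hypothetical "+" allele frequency

# Additive genetic variance contributed by one locus: 2pq * a^2
per_locus_var = 2 * P * (1 - P) * EFFECT ** 2   # ~0.00125, negligible alone
total_genetic_var = N_LOCI * per_locus_var      # ~1.25, large in aggregate

# Simulate: each person's polygenic score sums 2 * N_LOCI tiny allele effects.
def polygenic_score():
    return sum(EFFECT * (random.random() < P) for _ in range(2 * N_LOCI))

scores = [polygenic_score() for _ in range(2000)]
mean = sum(scores) / len(scores)
empirical_var = sum((s - mean) ** 2 for s in scores) / len(scores)

print(per_locus_var)      # tiny: no single "gene for" the trait
print(total_genetic_var)  # substantial heritable variation nonetheless
print(empirical_var)      # the simulation lands near the analytic value
```

No locus here would survive a "gene for X" headline, yet the summed variance is a thousand times larger than any single contribution, which is the sense in which heritability can be high while attributing it to specific genes stays hard.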

Comment author: billswift 17 May 2012 02:00:29PM *  0 points [-]

Genes and genetics go together in very nearly the same way as words and language.

Or, even more closely, as terms in a mass of spaghetti code.

Understanding the genetics of an organism is hard, because geneticists are trying to simultaneously reverse engineer that mass of code and learn what the terms are.

Comment author: gwern 30 May 2012 07:30:10PM 4 points [-]

Wikipedia experiment finished: http://www.gwern.net/In%20Defense%20Of%20Inclusionism#sins-of-omission-experiment-2

Close to zero resistance to random deletions. Most disappointing.

Comment author: wedrifid 30 May 2012 08:10:09PM 0 points [-]

I was persuaded.

Comment author: Luke_A_Somers 17 May 2012 02:43:38PM 4 points [-]

Dinosaur Comics today involves WBE


Comment author: Grognor 17 May 2012 07:47:33PM *  11 points [-]

I once asked Ryan North, via the twitters, if he was a transhumanist. He said he wouldn't accept the label, but T-Rex is obviously a transtyrannosaurist.

Comment author: cousin_it 16 May 2012 02:50:06PM *  4 points [-]

Some vague ideas about decision theory math floating in my head right now. Posting them in this raw state because my progress is painfully slow and maybe someone will have the insight that I'm missing.

1) thescoundrel has suggested that spurious counterfactuals can be defined as counterfactuals with long proofs. How far can we push this? Can there be a "complexity-based decision theory"?

2) Can we write a version of this program that would reject at least some spurious proofs?

3) Define problem P1 as "output an action that maximizes utility", and P2 as "output a program that solves P1". Can we write a general enough agent that solves P1 correctly, and outputs its own source code as the answer to P2? To stop the agent from solving P1 as part of solving P2, we can add a resource restriction to P2 but not P1. This is similar to Eliezer's "AI reflection problem".

Comment author: gRR 17 May 2012 03:51:53AM 0 points [-]

Thoughts on problem 3:

def P1():
    sumU = 0
    for (#U = 1; #U < 3^^^3; #U++):
        if (#U encodes a well-defined boundedly-recursive parameterless function
                that calls an undefined single-parameter function "A" with #U as a parameter):
            sumU += eval(#U + #A)
    return sumU

def P2():
    sumU = 0
    for (#U = 1; #U < 3^^^3; #U++):
        if (#U encodes a well-defined boundedly-recursive parameterless function
                that calls an undefined single-parameter function "A" with #U as a parameter):
            code = A(#P2)
            sumU += eval(#U + code)
    return sumU

def A(#U):
    enumerate proofs by length L = 1 ... INF:
        if found any proof of the form "A()==a implies eval(#U + #A)==u,
                and A()!=a implies eval(#U + #A)<=u":
            break
    enumerate proofs by length up to L+1 (or more):
        if found a proof that A()!=x:
            return x
    return a

Although A(#P2) won't return #A, I think eval(A(#P2)(#P2)) will return A(#P2), which will therefore be the answer to the reflection problem.

Comment author: gRR 16 May 2012 10:46:51PM *  0 points [-]

2) Can we write a version of this program that would reject at least some spurious proofs?

It's trivial to do at least some:

def A(P):
    if P is a valid proof that "A(P)==a implies U()==u, and A(P)!=a implies U()<=u",
            and P does not contain a proof step "A(P)=x" or "A(P)!=x" for any x:
        return a
    else:
        do whatever
Comment author: cousin_it 17 May 2012 12:29:32AM *  0 points [-]

Sure, but that's too trivial for my taste :-( You understand the intent of the question, right? It doesn't call for "an answer", it calls for ideas that might lead toward "the answer".

Comment author: gRR 17 May 2012 01:09:18AM 1 point [-]

To tell the truth, I just wanted to write something, to generate some activity. The original post seems important and useful, in that it states several well-defined and interesting problems. Seeing it sit alone in the relative obscurity of an Open Thread even for a day was a little disheartening :)

Comment author: JoachimSchipper 16 May 2012 11:29:07AM 4 points [-]

I know quite a bit about crypto and digital security. If I could find the time to write something, which won't be soon, is there something that would interest LessWrong? (If you just want to read crypto stuff, Matthew Green's blog is good; "how to protect a nascent known-to-be-actually-working GAI from bad guys" will read like "stay the fsck away from any mobile phones and the internet and don't trust your hardware; bring an army", which won't be terribly interesting.)

Comment author: [deleted] 20 May 2012 10:01:50AM *  9 points [-]

Ovulation Leads Women to Perceive Sexy Cads as Good Dads (HT: Heartiste)

Why do some women pursue relationships with men who are attractive, dominant, and charming but who do not want to be in relationships—the prototypical sexy cad? Previous research shows that women have an increased desire for such men when they are ovulating, but it is unclear why ovulating women would think it is wise to pursue men who may be unfaithful and could desert them. Using both college-age and community-based samples, in 3 studies we show that ovulating women perceive charismatic and physically attractive men, but not reliable and nice men, as more committed partners and more devoted future fathers. Ovulating women perceive that sexy cads would be good fathers to their own children but not to the children of other women. This ovulatory-induced perceptual shift is driven by women who experienced early onset of puberty. Taken together, the current research identifies a novel proximate reason why ovulating women pursue relationships with sexy cads, complementing existing research that identifies the ultimate, evolutionary reasons for this behaviour.

I think it isn't much disputed that women find the dark triad and certain other personality traits sexier when they are ovulating, so the above sounded to me like a clear example of the halo effect: sexy men will seem smarter and kinder than they are, because any positive trait beefs up our perception of a person in other areas as well. But even as my mind slowly noted that this should affect how women judge the odds of a man caring for other women's children, and that I have no information suggesting women are more prone to a halo effect for male sexiness in general during ovulation, I saw that the authors had considered this:

Finally, there were no main effects of fertility or fertility by target male interactions for any of the other positive attributes: attractiveness, financial status, and social status (all ps > .33). Ovulation also had no effect on the perception of men's attractiveness (M low-fertility dad = 5.06, M high-fertility dad = 4.73; M low-fertility cad = 5.79, M high-fertility cad = 5.65), financial status (M low-fertility dad = 4.76, M high-fertility dad = 4.77; M low-fertility cad = 5.64, M high-fertility cad = 5.64), or social status (M low-fertility dad = 4.82, M high-fertility dad = 4.74; M low-fertility cad = 6.21, M high-fertility cad = 6.07). The ovulatory-induced perception of paternal investment, therefore, is not produced by a halo effect when women evaluate sexy cads at high fertility.

Study 2 also tested whether the ovulatory-induced overperception of paternal investment was a product of a broader ovulatory-induced halo effect that occurs when women evaluate attractive and charismatic men. The results showed that there was no ovulatory effect on women’s perceptions of the sexy cad’s attractiveness, financial status, or social status. Thus, ovulation appears to shift women’s perceptions of a man’s willingness to invest in her offspring specifically, but not his other positive traits.

I guess heterosexual women should be conscious of this bias, especially those hoping to start a family, or when judging in other contexts which adult men they want their children to interact with. While they probably aren't wrong about how sexy they find someone, they are biased about the other traits that, judging from their stated preferences, they seek to maximize in such men.

Comment author: [deleted] 19 May 2012 08:30:25PM *  3 points [-]

The Essence Of Science Explained In 63 Seconds

A one minute piece of Feynman lecture candy wrapped in reasonable commentary. Excellent and most importantly brief intro level thinking about science and our physical world. Apologies if it has been linked to before, especially since I can't say I would be surprised if it was.

Here it is, in a nutshell: The logic of science boiled down to one, essential idea. It comes from Richard Feynman, one of the great scientists of the 20th century, who wrote it on the blackboard during a class at Cornell in 1964. YouTube

Think about what he's saying. Science is our way of describing — as best we can — how the world works. The world, it is presumed, works perfectly well without us. Our thinking about it makes no important difference. It is out there, being the world. We are locked in, busy in our minds. And when our minds make a guess about what's happening out there, if we put our guess to the test, and we don't get the results we expect, as Feynman says, there can be only one conclusion: we're wrong.

The world knows. Our minds guess. In any contest between the two, The World Out There wins. It doesn't matter, Feynman tells the class, "how smart you are, who made the guess, or what his name is, if it disagrees with the experiment, it is wrong."

This view is based on an almost sacred belief that the ways of the world are unshakeable, ordered by laws that have no moods, no variance, that what's "Out There" has no mind. And that we, creatures of imagination, colored by our ability to tell stories, to predict, to empathize, to remember — that we are a separate domain, creatures different from the order around us. We live, full of mind, in a mindless place. The world, says the great poet Wislawa Szymborska, is "inhuman." It doesn't work on hope, or beauty or dreams. It just...is.

Comment author: shminux 18 May 2012 03:03:21PM *  3 points [-]

A low-inferential-distance perspective on the inferential distance concept.

Comment author: JoshuaZ 29 May 2012 05:46:08PM 2 points [-]

Using large scale genetic sequencing has for the first time found the cause of a new illness. Short summary here and full article here. In this situation, an individual had a unique set of symptoms, and by doing a full exome scan for him and his parents they were able to successfully locate the gene that was creating the problem and understand what was going wrong.

Comment author: RomeoStevens 22 May 2012 12:49:12AM 2 points [-]

Can someone help me corrupt this wish?

"Give humans control over their own sensory inputs."

Comment author: JoshuaZ 22 May 2012 12:55:10AM 2 points [-]

Everyone falls into a coma where they get to control their own individual apparent reality. Meanwhile they all starve to death or run into other problems because nothing about the wish says they need to stay alive.

Comment author: RomeoStevens 22 May 2012 12:56:35AM 2 points [-]

Doesn't discontinuation of the sensory experience count as a lack of control?

Comment author: Desrtopa 22 May 2012 01:47:43PM 1 point [-]

Well, the wish doesn't say "give me the ability to control my sensory experience forever". If you die, your ability to control your body is discontinued, but that doesn't mean you couldn't control your body.

Comment author: RomeoStevens 22 May 2012 06:55:34PM 1 point [-]

can you expand a little on this?

Comment author: Desrtopa 22 May 2012 07:56:04PM 1 point [-]

Suppose that a person with locked-in-syndrome wished for voluntary control of their body. Their disorder is completely cured, and they gain the ability to control their body like anyone else. Would you say that their wish wasn't really granted unless they never die?

Comment author: RomeoStevens 22 May 2012 08:24:45PM 0 points [-]

personally yes, but I realize this is strange.

Comment author: JoshuaZ 22 May 2012 01:01:17AM 1 point [-]

Hmm, possibly. But everyone stuck in their own sensory setting with no connection to anyone else is still pretty bad.

Comment author: RomeoStevens 22 May 2012 01:22:52AM *  0 points [-]

You aren't necessarily stuck anywhere. How the statement "I want to talk to Brian" gets unpacked once the wish has been implemented depends on how "control" gets unpacked. Any statement we make about sensory experiences we wish to have involve control only on one conceptual level. We can't control what Brian says once we're talking to him, but we never specified that we wanted control over it either. I think that you wind up with a conflict where you ask for control on the wrong conceptual level, or two different levels conflict. I'm having trouble coming up with examples though.

Comment author: JoshuaZ 22 May 2012 01:49:59AM 1 point [-]

And if "I want to talk to Brian" is parsed that way doesn't that require telling Brian that someone wants to talk to him, which for at least a few seconds takes control away from Brian of part of his sensory input?

Comment author: RomeoStevens 22 May 2012 05:29:48AM *  1 point [-]

So a problem is that it would be impossible to know what options to make more obviously available to you. If the action space isn't screened off, the number of options you have is huge. There's no way to present these options to a person in a way that satisfies "maximum control". As soon as we get into suggesting actions, we're back to the problem of optimizing for what makes humans happy.

This is highly helpful BTW.

Comment author: CuSithBell 22 May 2012 08:41:31PM *  1 point [-]

None of that control is automated, and this manual control is the only source of input.

Comment author: RomeoStevens 22 May 2012 10:48:55PM 3 points [-]

hahaha please specify wavelengths of light that will hit each receptor. Very good.

Comment author: CuSithBell 23 May 2012 05:58:10PM 0 points [-]

Exactly! It'd be pretty sucky.

Comment author: NancyLebovitz 21 May 2012 08:26:58PM 2 points [-]

Setting up policies to discuss politics without being mind-killed-- I'm linking to this in the early phase because LWers might be interested in following the voluminous discussions on that site to see whether this is possible, and it will be easier to start from the beginning, and also possible to make predictions.

Comment author: [deleted] 20 May 2012 01:31:54PM 2 points [-]

I would be interested in setting up an online study group, preferably via google hangout or skype for several key sequences that I want to grok and question more fully. Anyone else interested in this?

Comment author: JoachimSchipper 22 May 2012 06:15:55PM 3 points [-]

I currently do not have time, but it may be helpful if you state which sequences you intend to look at.

Comment author: [deleted] 22 May 2012 06:27:27PM 1 point [-]

Meta-ethics for starters.

Comment author: JoachimSchipper 22 May 2012 06:48:10PM 1 point [-]

Good choice - I've read all of it, and I still don't have a really good idea what it says. Please do post something if you can make an accessible and concise summary.

Comment author: Jabberslythe 18 May 2012 11:46:22PM 2 points [-]

I haven't heard this problem mentioned on here yet: http://www.philosophyetc.net/2011/04/puzzle-of-self-torturer.html

What do you think of the puzzle? Do you think the analysis here is correct?

Comment author: Oscar_Cunningham 19 May 2012 02:29:16AM 0 points [-]

It's a good puzzle, and the analysis dealing with it is correct.

Comment author: steven0461 19 May 2012 02:38:15AM *  1 point [-]

How is it even possible for A and B to be indiscriminable, B and C to be indiscriminable, but A and C to be discriminable? It seems like if A and B cause the exact same conscious thoughts (or whatever you're updating on as evidence), and B and C do, then A and C do. I think in practice, what's more likely is that you can very weakly probabilistically discriminate between any two adjacent states.

Comment author: TheOtherDave 19 May 2012 03:07:50PM 2 points [-]

If the difference between A and B is less than the observer's just-noticeable-difference, and the difference between B and C is as well, it doesn't follow that the difference between A and C is.
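
A minimal numerical sketch of this point (the stimulus values and threshold below are made up; any perceiver with a fixed discrimination threshold behaves the same way):

```python
# Hypothetical numbers: a perceiver that reports a difference only when
# two stimuli differ by at least its just-noticeable difference (JND).
JND = 2.0

def discriminable(x, y, jnd=JND):
    """Can the perceiver tell stimulus x apart from stimulus y?"""
    return abs(x - y) >= jnd

a, b, c = 0.0, 1.5, 3.0
print(discriminable(a, b))  # False: A and B seem the same
print(discriminable(b, c))  # False: B and C seem the same
print(discriminable(a, c))  # True: yet A and C are told apart
```

Indiscriminability here is not transitive because each comparison is made against the same fixed threshold, so small sub-threshold differences can accumulate into a detectable one.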

Comment author: crazy88 28 May 2012 11:46:57PM 1 point [-]

Frank Arntzenius (a philosopher at Oxford) has argued something along these lines.

I don't think that article is paywalled (though I'm using a university computer, logged on to my account so I'm not sure whether I automatically get passed through any paywall that may exist).

Comment author: tut 19 May 2012 10:00:19AM *  0 points [-]

Chunking of sensory input happens at a lower layer in the brain than consciousness. So if you have learned that two stimuli are the same, they might be indistinguishable to you unless you spend thousands of hours deliberately practicing distinguishing them, even if there is a detectable difference, and even if you can distinguish stimuli that are just a bit further apart.

Comment author: wedrifid 25 May 2012 10:39:27PM *  3 points [-]


First time this has happened since the 30-day karma score was implemented. LessWrong addictions are apparently easy to squelch!

Comment author: wedrifid 28 May 2012 10:51:49PM 1 point [-]


I also like this one. Lucky timing to check in at the round number!

Comment author: TheOtherDave 25 May 2012 10:41:13PM 1 point [-]

Go you!
I've noticed your absence, FWIW.

Comment author: khafra 16 May 2012 12:38:53PM 2 points [-]

I like the Operations Research subreddit. Other people looking for applied rationality might like it, too. This probabilistic analysis of problems with federal vanpools is a characteristic example.

Comment author: gwern 16 May 2012 04:24:59PM 0 points [-]

Looks interesting; I've subscribed.

Comment author: shminux 19 May 2012 12:00:36AM 1 point [-]

Suppose that, after some hard work, EY or someone else proves that a provably-friendly AGI is impossible (in principle, or due to it being many orders of magnitude harder than what can reasonably be achieved, or because a spurious UFAI is created along the way with near certainty, or for some other reason).

What would be a reasonable backup plan?

Comment author: JoshuaZ 29 May 2012 05:43:45PM 1 point [-]

Try really hard to get reasonably safe oracle AI? Focus on human uploading first?

Comment author: shminux 29 May 2012 06:08:56PM 1 point [-]

All good questions; I hope someone at SI asks them instead of betting on a single horse.

Comment author: Zaine 16 May 2012 10:25:44AM *  1 point [-]

To give potentially interested parties a greater chance of learning about Light Table, I'm reposting about it here:

"I know there are many programmers on LW, and thought they might appreciate word of the following Kickstarter project. I don't code myself, but from my understanding it's like Scrivener for programmers:


Comment author: NancyLebovitz 16 May 2012 02:41:44PM *  1 point [-]

It sounds like it might be a useful program for any complicated project, even if the project isn't a program.

Comment author: vi21maobk9vp 17 May 2012 05:09:08AM 3 points [-]

As a programmer, I am tempted to say "unless the project is actually a large program". "Large" is relative, of course.

Of course, I had seen LightTable before the comment on LW, and I tried to imagine applying it to a basically data-crunching (as opposed to mostly UI) program. Visualising computation may look like a good idea. Unfortunately, at the level demonstrated in the demo, it is simple enough that anyone who even tries to write a big program can keep it in mind.

When you have multiple layers of abstraction and each of them has a reason to do non-trivial double loops (which is not that much if you can say what each level is doing and why), what we see in the demo would become overcluttered. I am not sure whether LightTable will grow into a tool that makes UI fine-tuning more comfortable, or whether it will try to invent approaches that work for back-ends and isolated data-crunching. In the former case it will stay a niche thing but may become a well-polished, narrow-focus tool. In the latter case it will have to transform so much that it is hard to tell whether the current developer will succeed.

Comment author: David_Gerard 26 May 2012 11:35:53PM *  1 point [-]

Luke's comment on just how arse-disabled SIAI was until quite recently (i.e., not to any unusual degree) inspired me to read Nonprofit Kit For Dummies, which inspired me to write a blog post telling everyone to buy it. Contains much of my bloviating on the subject of charities from LessWrong over the past couple of years. Includes extensive quote from Luke's comment.

Comment author: beoShaffer 20 May 2012 04:35:48PM 1 point [-]

Does anyone know of any good online resources for Bayesian statistics? I'm looking for something fairly basic, but beyond the "here's what Bayes' theorem is" level that Khan Academy offers.

Comment author: RomeoStevens 22 May 2012 06:59:38PM 1 point [-]

Pick up a used textbook for cheap. I don't remember which one is good, but there's a textbook recommendation thread somewhere.

Comment author: Bill_McGrath 19 May 2012 08:40:54AM 1 point [-]

I'm hoping to do some reading on music cognition. I've got a pretty busy few months ahead, so I can't say how far I'll get, and I'm not used to reading scientific literature, so it'll be slow going at first I'm sure, but I'd like to get a better grasp of this field.

In the vein of lukeprog's posts on scholarship, does anyone here know anything about this field, or where I might begin to learn about it? I've got access to a library with a few books dealing with the psychology of music, and I can get online access to a few journals. I've also read most of Levitin's Music and Your Brain, which is a reasonably good pop-science (and largely pop-music) introduction to the topic, and Wikipedia actually has a graded reading list that seems promising.

Any other thoughts?

Comment author: michaelcurzi 16 May 2012 02:15:21PM 1 point [-]

This play in NYC looks pretty sweet. It looks like it touches on some concepts like Godshatter, idea from Three Worlds Collide, and a healthy understanding of the idea that technology could make us very very different from who we are now.

While exploring many of the common ideas that come attendant with our fascination with A.I., from Borglike interfaced brains to 2001-esque god complexes, DEINDE is particularly focused on two aspects: how to return to being "normal" after experiencing superhuman intelligence, and how, or if we should, return from the experience of being deeply networked with one another. Would we forsake enhanced intellect or profound psychic connection, once felt?

Looks like it's stopped running for now, though.

Comment author: knb 30 May 2012 11:13:41AM 0 points [-]

I seem to remember someone posting on Less Wrong about software that locks your computer to only doing certain tasks for a given period (to fight web-surfing will-power failures, I guess). After some cursory digging on the site, I couldn't find it. Does anybody remember the thread where this kind of self-binding software was discussed, or at least the name of some brand of this software?

(Ideally I would like to read the thread first, and get a sense of how well this works.)

Comment author: CWG 30 May 2012 04:46:32AM *  0 points [-]

How old are you?

I'm 41. I'm curious what the age distribution is in the LW community, having been to one RL meetup and finding I was the oldest one there. (I suspect I was about 1.8 times the median age.)

I love what the LW community stands for, and age isn't a big deal... youthful passion is great (trying to hold onto mine!) and I suspect there isn't a particularly strong correlation between age and rationality, but life experience can be valuable in these discussions. In particular, having done more dumb things and believed more irrational things, and gotten over them.

Comment author: gwern 29 May 2012 10:40:42PM *  0 points [-]

Iodine post up: http://www.gwern.net/Nootropics#iodine

I've been working on this off and on for months. I think it's one of my better entries on that page, and I imagine some of the citations there will greatly interest LWers: e.g. not just the general IQ impacts, but that iodization causes voters to vote more liberally.

I also include subsections for a meta-analysis to estimate effect size, a power analysis using said effect size as guidance to designing any iodine experiment, and a section on value of information, tying all the information together.

My general conclusion is that it looks like I should take some iodine, but currently self-experimentation is just too hard to do for iodine.

Comment author: Kindly 29 May 2012 03:36:27PM 0 points [-]

Ever since getting an apartment of my own I've found that, well, I spend more time alone than I used to. Rather than try to take every possible action to ensure that I'm alone as little as possible (which is desperate some of the time and silly a lot of the time) I want to try to learn to like being alone.

So what are some reasons to enjoy spending time alone as opposed to spending it with other people? Or other suggestions about how to self-modify in this way?

Comment author: knb 30 May 2012 11:05:25AM *  1 point [-]

Not sure if this counts as "alone" but you could schedule regular skype video calls with friends/relatives. It took some doing, but I'm a lot happier living alone when I still see and talk to my family a few times a week. I'm actually surprised more people don't do this.

Comment author: Kindly 30 May 2012 05:09:39PM 0 points [-]

Thank you for your advice, but I don't think that's exactly what I'm looking for. Rather than seek out human contact because I'm not comfortable being alone, I would rather be comfortable being alone and then seek out human contact for its own sake.

Comment author: Gastogh 22 May 2012 10:12:29PM 0 points [-]

I'm looking for a book recommendation on anthropology. I have almost no prior knowledge of the field. I'm after something roughly equivalent to what The Moral Animal was for evolutionary psychology: from-the-ground-up stuff that works by itself and doesn't assume significant background knowledge or further reading for a payoff. An easily accessible pop-writing approach à la The Moral Animal is a must-have; I can't read academic textbooks.

Comment author: NancyLebovitz 21 May 2012 05:42:05PM 0 points [-]

I'm reading Ursula Vernon's Digger (nominated for the Graphic Novel Hugo), and it's very much in the spirit of extrapolating logically from odd premises. Digger (a wombat) is sensible and pragmatic and known to complain about how irresponsible Dwarves are for using magic to shore up their mines.

Comment author: Suryc11 21 May 2012 12:48:13AM *  0 points [-]

My major (field of study) in college/university is most likely going to be philosophy. I'm an avid reader of this blog, and as such have internalized many LW concepts and terminology, particularly relating to philosophy. In short, should I cite this site if I make use of a LW concept - learnt several years ago on here - in a paper for a philosophy class? If yes (and I'm leaning towards yes), how?

In general, if one internalizes a blog-specific idea off of the Internet and then, perhaps unintentionally, includes it in a somewhat unrelated undergraduate paper, how does one go about referencing the blog - especially if the idea came from a comment that has since disappeared and/or cannot be found?

This is so far hypothetical, but I am sure that this situation will occur at least once in the next few years.

Comment author: beoShaffer 21 May 2012 04:34:00AM *  1 point [-]

How you cite it depends on the citation format for the paper as a whole, but most major formats now have instructions on how to cite blogs, so check the reference book/website for whatever formatting style you're using. A decent example is the Purdue OWL's guide to citing "electronic resources" in MLA, which is a fairly common style for philosophy papers.

Edit-fixed typo

Comment author: [deleted] 20 May 2012 02:34:52PM *  0 points [-]

An excellent debate between SIAI donor Peter Thiel and George Gilder on:

"The Prospects for Technology and Economic Growth"

I suggest skipping the first 8 minutes since they are mostly intro fluff. Thiel makes a convincing case that we are living in a time of technological slowdown. His argument has been discussed on LessWrong before.

Comment author: NancyLebovitz 22 May 2012 10:28:59PM *  2 points [-]

I found Gilder so annoying (information does not trump the laws of physics!!) that I listened to this instead-- Thiel and Niall Fergusson at Harvard.

Does Gilder say anything intelligent? If he doesn't, does he get squashed flat?

Comment author: AlexSchell 17 May 2012 12:59:29PM 0 points [-]

There is an obvious-in-retrospect symmetry between overconfidence and underconfidence in one's predictions. Suppose you have made a class of similar predictions of the form A and have on average assigned 0.8 confidence to them on average, while 60% actually came true. You might say that you are suffering from overconfidence in your predictions. But when you predict A with confidence p, you also predict ~A with confidence (1-p): you have on average assigned 0.2 confidence to your ~A-type predictions, while 40% actually came true. So if you are overconfident in your A-type predictions you're bound to be underconfident in your ~A-type predictions.

Intuitively, overconfidence and underconfidence feel like very different sins. It looks like this is due to systematic tendencies in what we view as a prediction and what we don't -- in the exercise above, assuming the set of A-type beliefs is self-selected, it seems that the A-type beliefs count as "predictions" whereas ~A-type beliefs don't. Some potential factors in what counts as a "prediction": belief > 0.5; hope that the prediction will come true; the prediction is very specific and yet assigned a substantial credence (say, above 0.1), so is supported by a lot of evidence, whereas the negation is a nonspecific catch-all.
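
The arithmetic of the symmetry can be sketched directly (illustrative numbers matching the example above, not real prediction data):

```python
# Illustrative data: ten A-type predictions, each held at confidence 0.8,
# of which six came true (matching the 0.8 / 60% example above).
confidences = [0.8] * 10
outcomes = [True] * 6 + [False] * 4

# Calibration on the A-type framing: average confidence vs. hit rate.
mean_conf_A = sum(confidences) / len(confidences)
freq_A = sum(outcomes) / len(outcomes)

# The very same beliefs, restated as ~A-type predictions at confidence 1 - p.
mean_conf_notA = sum(1 - p for p in confidences) / len(confidences)
freq_notA = sum(not o for o in outcomes) / len(outcomes)

print(mean_conf_A > freq_A)        # True: overconfident on the A-type framing
print(mean_conf_notA < freq_notA)  # True: underconfident on the ~A-type framing
```

The two gaps are necessarily equal in size and opposite in sign, since each ~A prediction is just the complement of an A prediction; the asymmetry we feel comes entirely from which framing we label "the prediction".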

Comment author: Jayson_Virissimo 17 May 2012 01:19:37PM *  0 points [-]

There is an obvious-in-retrospect symmetry between overconfidence and underconfidence in one's predictions. Suppose you have made a class of similar predictions of the form A and have on average assigned 0.8 confidence to them on average, while 60% actually came true. You might say that you are suffering from overconfidence in your predictions. But when you predict A with confidence p, you also predict ~A with confidence (1-p): you have on average assigned 0.2 confidence to your ~A-type predictions, while 40% actually came true. So if you are overconfident in your A-type predictions you're bound to be underconfident in your ~A-type predictions.

Intuitively, overconfidence and underconfidence feel like very different sins. It looks like this is due to systematic tendencies in what we view as a prediction and what we don't -- in the exercise above, assuming the set of A-type beliefs is self-selected, it seems that the A-type beliefs count as "predictions" whereas ~A-type beliefs don't. Some potential factors in what counts as a "prediction": belief > 0.5; hope that the prediction will come true; the prediction is very specific and yet assigned a substantial credence (say, above 0.1), so is supported by a lot of evidence, whereas the negation is a nonspecific catch-all.

Yeah, we have discussed this before.

Comment author: maia 16 May 2012 03:51:20PM 0 points [-]

Question about anti-akrasia measures and precommitments to yourself.

Suppose you need to do action X to achieve the most utility, but it's somewhat unpleasant. To incentivize yourself, you precommit to give yourself reward Y if and only if you do action X. You then complete action X. But now reward Y has become somewhat inconvenient to obtain.

Should you make the effort to obtain reward Y, in order to make sure your precommitments are still credible?

Comment author: shminux 16 May 2012 03:57:34PM 5 points [-]

But now reward Y has become somewhat inconvenient to obtain. Should you make the effort to obtain reward Y, in order to make sure your precommitments are still credible?

Is there an equivalent reward that is easier to obtain?

Comment author: sixes_and_sevens 16 May 2012 04:33:52PM 1 point [-]

Can you provide some specific examples?

Comment author: Grognor 16 May 2012 05:09:28PM *  2 points [-]

Let me make one.

Suppose you are reading your favorite blogs, when the idea strikes you, "Okay, I need to do X, but I can't do it without an incentive. I shall order chicken wings, which are delicious, upon X's completion."

Dozens of minutes later, X is finished! But wait! You fell victim to the planning fallacy! Everywhere in the city that delivers chicken wings is closed now because X took longer than you thought it would.

In this case, it would be fairly senseless to wait until the next day to order the wings, as by then the reward would be completely disconnected from the action. Driving 35 minutes to get them would also be pretty senseless. I don't know about driving 15 minutes.

This seems like a fairly difficult problem, but also one that simply will not occur very often, especially if you make your incentive something that's unlikely to be difficult to obtain by the time you finish X.

Comment author: sixes_and_sevens 17 May 2012 11:06:17AM 2 points [-]

That's how I interpreted it as well, but I'm not sure the OP is distinguishing the signalling purpose of pre-commitment strategies from mechanisms of pre-commitment.

Reputations of pre-commitment are about signalling credible consequences in circumstances of asymmetric information. When bargaining with oneself, information is about as symmetric as it can get. It's not like you mistrust your future self's willingness to go through with getting chicken wings. Any obstacle to getting them is transparent to all parties (you), and shouldn't impact your future expectation of being able to reward yourself unless you're staggeringly incompetent at obtaining chicken wings. If that's the case, you'll probably factor this in when planning your incentive.

Mechanisms of pre-commitment are a more salient tool when bargaining with oneself over time (cf. dynamic inconsistency ), but only when your goals are inconsistent over time. Post-X you presumably wants chicken wings as much as pre-X you, but is more informed about the cost of obtaining them. There is presumably a level of expense pre-X you would sensibly commit to for the specified reward. If some sort of catastrophe occurred as soon as you'd finished X, pre-X you wouldn't expect post-X you to crawl through the dust with your one remaining limb muttering "must...get...chicken...wings..."

The issue seems to boil down to "are you staggeringly incompetent at rewarding yourself? If not, don't worry."

Comment author: [deleted] 16 May 2012 11:16:48PM *  0 points [-]

Are you entering into a sub-function of the original X/Y assessment here? As in: if X is done, then Y, but Y is itself a function of assessing the optimal reward for X?

If it's still important to add a reward of Y (in addition to the personal value of having completed X), you probably need to substitute with something novel and maintain the understanding that it is a reward for X (even if not the originally scoped one).

Comment author: beoShaffer 16 May 2012 04:05:20PM 0 points [-]

It depends on the difficulty of obtaining Y relative to its pleasantness, but in general I would say yes. Specifically, good anti-akrasia measures are valuable enough that you should be willing to go through quite a bit of effort to preserve them. Thus if the effect of obtaining Y in these circumstances is to preserve your precommitment ability, then it is worth expending a large amount of effort on Y. But you should also keep in mind the possibility that you will develop a negative association between fulfilling your precommitments and then having to go through a large amount of effort for a reward that isn't worth it. Is there some "guilty pleasure" or other suitable reward that you could substitute for Y that would keep to the spirit of the bargain you made with yourself?