I have a related motivational problem with To Do lists. I find they help me remember all the things I have to do during a day, but I seem to get the same feeling of accomplishment when I cross off some trivial errand as when I accomplish something major. The end result is that trivial errands get done, while the important tasks often get left behind.
The standard solution to this in the productivity lit is to use concepts like Big Rocks and Eat That Frog!, where you build your day/week around your major tasks, always doing them first.
http://zenhabits.net/2007/04/big-rocks-first-double-your-productivity-this-week/ http://books.google.com/books?id=R3iBRVOX1tIC
I seem to get the same feeling of accomplishment when I cross off some trivial errand as when I accomplish something major. The end result is that trivial errands get done, while the important tasks often get left behind.
You could always exploit that by adding more things to your to-do list, such as adding tasks to break down other tasks into even more tasks. ;-)
There are a few time management systems I know of that actually do have built-in adjustments for this tendency. Mark Forster's Autofocus system allows you to cross off a task after spending as little as one minute on it -- you just write down the next piece(s) at the end of the list, even if all you did on the task was to break it down into smaller pieces. And The Pomodoro Technique has you break large tasks down, or combine small tasks up, to form units called "pomodoros".
I think the standard GTD advice is that you're supposed to be breaking the important tasks down into a lot of little trivial tasks.
I have tried that strategy before, and I found it disastrously bad. First, breaking large tasks into small ones, and tweaking the breakdown, is a task itself, which can be used to procrastinate from things that actually need doing. And second, it makes todo lists appear very large, which gave me decision paralysis when it came time to pick something from the list.
gwern:
I think the standard GTD advice is that you're supposed to be breaking the important tasks down into a lot of little trivial tasks.
jimrandomh:
I have tried that strategy before, and I found it disastrously bad. First, breaking large tasks into small ones, and tweaking the breakdown, is a task itself, which can be used to procrastinate from things that actually need doing. And second, it makes todo lists appear very large, which gave me decision paralysis when it came time to pick something from the list.
Here's a terrific example of how people end up saying they tried things that didn't work, when they aren't actually talking about the same thing that's being suggested.
In this particular case, the way in which tasks are broken down is important. GTD does not say to break a task down into its component parts and add them all to your lists. In fact, ISTR it advising against this, for the very reasons jimrandomh describes.
What GTD advises you to do is look at a large task on your "projects" list, and simply write down the NEXT trivial task that needs to be done on it. The breakdown is parallel/incremental, occurring each time you review your projects list or finish a task related to the project and think of something else to do.
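To make the contrast concrete, here is a minimal sketch (my own illustration in Python, with made-up task names -- nothing from the GTD book itself) of the difference between dumping a full decomposition onto the list and surfacing only the next action at each review:

```python
# Hypothetical sketch contrasting the two breakdown strategies discussed above.
# Names and structure are illustrative, not taken from GTD itself.

class Project:
    """A large task sitting on the 'projects' list."""
    def __init__(self, name, remaining_steps):
        self.name = name
        self.remaining_steps = list(remaining_steps)  # steps not yet surfaced


def full_decomposition(project, todo_list):
    """The strategy jimrandomh tried: dump every component step onto the list up front."""
    todo_list.extend(f"{project.name}: {step}" for step in project.remaining_steps)
    project.remaining_steps.clear()


def next_action_review(project, todo_list):
    """The incremental strategy: at each review, surface only the single next action."""
    if project.remaining_steps:
        todo_list.append(f"{project.name}: {project.remaining_steps.pop(0)}")


if __name__ == "__main__":
    todo = []
    paper = Project("Write paper", ["outline", "draft intro", "draft body", "edit"])
    next_action_review(paper, todo)
    print(todo)  # ['Write paper: outline'] -- the list stays short
```

The point of the sketch is only that the second function keeps the visible list short while still making incremental progress on the project.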
Human beings are rather bad at discussing cognitive algorithms in general - we tend to dramatically simplify our descriptions in ways that make algorithms with significant differences in steps and impact sound "the same".
And I'm quoting both gwern and jimrandomh here because it's not that either of them made a mistake, per se: gwern described the GTD strategy in a way that is incomplete but not false, and jimrandomh did not make a false statement about GTD; he made a true statement about something that is not GTD.
However, a third party reading this exchange could easily come to the conclusion that they had just read a report of GTD sucking -- and choose not to investigate or evaluate GTD!
This happens with the discussion of almost any method of thinking or organizing: simplified not-false descriptions of technique X are then linked to procedure Y which fits the same description but doesn't work as well, leading to an eventual widespread conclusion among non-insiders of X that X "doesn't work".
When teaching cognitive algorithms, it is important to be precise, since parallel, serial, incremental, etc. algorithms have very different performance characteristics, memory and "hardware" requirements, etc., even when they're run on the human platform.
(Edit to add: Note that this is not a fully general argument for dismissing criticisms of X; it is an argument for making damn sure you reduce X and Y to concretely-defined steps that include all of X's claimed distinctions, before you apply criticism of Y to criticism of X. And the more "popular" X is, the more important it is to do this, because the more popular it is, the more likely you are to have heard only a watered-down version of it.)
What a nice comment! The next time I feel tempted to describe how GTD solves some problem or other, I'll wait until the urge passes and message you instead; clearly my one read through Getting Things Done has taught me just enough to be dangerous.
Er, I'm not able to tell if you're being serious or sarcastic here. But do note that I was just pointing out a systemic problem with talking about cognitive algorithms, not criticizing you OR jimrandomh for attempting to do so. In particular, I didn't say your statement was false or incorrect, just that it was imprecise enough for somebody else to project a different meaning onto it.
My own experience trying to teach things is that in any one-way communication about cognitive techniques, it is virtually impossible to prevent this kind of projection, because you not only have to state all the distinctions, you also have to explicitly contrast them with whatever people think is the "default".... and people have different defaults!
The only reliable way to get somebody to really understand something in the domain of experiential behavior, is to get them to actually do that something... which is why I'm so vocal about telling people to try things before they evaluate them, not after.
Anyway, the point was not to be critical of you, since the thing I would be critiquing is literally unavoidable. No matter what you write or say, people can project on it, and a feedback loop in the communication (plus willingness to listen on both sides) is the only way to guarantee a fix for the misunderstandings that result.
I was actually being sincere. I respect the GTD methods (even if I think they're probably on the complex side), so finding out that my understanding of a fundamental point was wrong was a valuable service.
I did briefly reflect 'hm, I wonder if this sounds sarcastic?', but I passed over it. I wonder what made it sarcastic for you? Would it have helped if I hadn't used the 'until the urge passes' expression? Was it the semicolon and the single-paragraph structure?
I found it difficult to determine whether you were being sarcastic. I think the most reads-as-sarcastic part is the structure of "[In the future,] I'll [subordinate myself to you]; clearly [I am incompetent]." -- and the overall tone is rather gushingly-positive-about-criticism which is a common mode of sarcasm, i.e. "Oh, now that I've been told I'm wrong I will, of course, immediately switch over to your view of things."
A lot of Internet conversations have this problem with detecting sarcasm (or the lack of it). Maybe we should start marking sarcastic statements, e.g. with the Lojban discursive je'unai ("commentary on this sentence: it's false"), pronounced jeh-who-nye.
For example:
Those root canals I had the other day were so much fun! je'unai
The standard GTD advice is that trivial things need to be done and bills need to be paid or you'll end up without electricity at home, so GTD doesn't see this as a big problem.
Certainly, it's true that a stitch in time saves nine - but knb's problem is not usefully resolved by saying it's a nonproblem.
If he can't get anything important done for the sake of the trivial, that is a very big problem.
I'm not saying it's a non-problem by any external standard, it's just that GTD assumes it's pretty much a non-problem.
A solution that I have heard works is adding the same item multiple times. Not directly -- that would be too easy -- but instead by adding a new task to finish an older list. The longer a task languishes, the more "tasks" you can cross off when it finally gets done.
it feels like more of a status accomplishment to reach a broader audience (y'know, one with lots of hot babes - that's why guys seek status, after all). (This is a downside to LW being a sausage-fest - less incentive for men to status-seek through community-valued accomplishments if it won't get them chicks.)
just imagine then the waves of community-valued accomplishments that could be unleashed with lesswrong-gay.com ;-)
As a counterpoint, recall the claim that public celibacy pledges by a group of teenagers are effective, as long as the pledgers can view themselves as an embattled minority (less than 1/3 of the community).
That does make some intuitive sense given this result, but it also runs counter to the more general failure of celibacy pledges I recall reading about (Google fished a related story here). Can you provide a citation?
Incidentally, I find it interesting that pledge-breakers are less likely to be prepared (less use of birth control and condoms) for extramarital sex when they have it anyway. Seems like a clear example of motivational self-deception interfering with rational planning.
Actually, it's an example of a much more specific pattern: Robert Fritz's idea-belief-reality conflict. A social ideal (celibacy) is set up to offset a feared social anti-ideal (sinful promiscuity), setting up two opposing "interests" (in Ainslie's model of the will), one of which is identified with conscious control (celibacy) and one which is not (sex). Since the latter interest is not planned, it is satisfied hastily as soon as preference reversal occurs.
In a perverse way, setting up an ideal for one's self actually strengthens the feared desire or behavior, by making the avoided thing part of a negative self-identity. As Fritz puts it, who would create an ideal of being celibate, but someone who's afraid they won't be celibate? A person with no desire for sex has no reason to make such a big deal out of it.
The same thing applies to any pledge you make a big deal out of. Or as I like to put it, "whatever pushes you forward, holds you back." My past ideal to be organized and productive derived from a fear of being sloppy and lazy, not a desire for actual organization or productivity.
Based on this, I would indeed expect pledges of any sort to be an indicator of a strong desire to do the opposite of the pledged thing. I would expect an even stronger correlation, however, if you separated the people pledging into two groups, based on their answer to the question, "What would happen if you broke your pledge?" If the person answers that something bad will happen, I predict a higher correlation with actual failure than if they say something like, "Well, I wouldn't like it, but I would move on." The latter person is not in "push" or "ideal" territory, the former is.
This prediction is not specific to celibacy pledges, btw; I'm saying that anybody making a public pledge could be sorted into one of those two groups, with the "push" group having a distinctively higher probability of having their effort end in failure, and the other group being more likely to stick to their direction. And it's not so much a matter of my personal observation (although I certainly have observed it) as it is a logical prediction from Seligman's research into optimism and Dweck's research into the "growth" mindset. "Something bad will happen if I fail" is not a thought engaged in much by optimists or the growth-minded, and it's optimists and growth-minded people who are most likely to succeed when a task needs sticking to.
As Fritz puts it, who would create an ideal of being celibate, but someone who's afraid they won't be celibate?
While this makes intuitive sense to me as a rhetorical question, I think one actual answer is "someone embedded in a culture that positively values celibacy pledges as status signals". It seems that more folks with innate athletic talent create and promote ideals of athletic virtue, while folks with more innate cognitive talent lean toward ideals of intellectual achievement.
That said, I'd agree that these are edge cases compared to the quantity of public pledges that do indeed appear to be fear-driven, in which case "whatever pushes you forward, holds you back" looks reasonable. Note though that the negative consequences I'm talking about aren't so much a direct result of the backfiring pledge strengthening their desire (they might easily have had sex regardless) as they are of an inaccurate belief sabotaging their ability to act rationally. Even if the pledge and subsequent belief did work to decrease the odds of extramarital sex, the net gain could well be negative if this difference doesn't outweigh the consequences of the more drastic failure modes (unwanted pregnancies and diseases).
It seems that more folks with innate athletic talent create and promote ideals of athletic virtue, while folks with more innate cognitive talent lean toward ideals of intellectual achievement.
You would think so, but a key symptom in this type of "ideal" is whether it's also Serious and Important And In Capital Letters, because that's an indication of an aversive component. People with talent usually don't elevate the subject of their talent to a Serious Ideal (as opposed to something they just think is fun and wonderful) until they develop some kind of fear about it.
And when the ideal itself is framed negatively -- celibacy, teetotaling, etc. -- one may be a bit more certain that aversion is involved. Pledging these things is likely a signaling of the form, "don't punish me for nonconformance, I am conforming and promoting conformance to tribal standards".
In any case, whether the tribe explicitly makes the ideal a goal, or if you just create it personally because of a bad experience, the same machinery and behaviors end up on the case.
From my own experience, I never thought about being "smart" until some kid bugged me about it... and then I wound up making it a part of my identity, which then had to be defended. Before that, it was not a Serious Ideal, and didn't negatively affect my self-esteem or behavior. After, it was something I had to expend lots of energy to protect and avoid challenges to.
Unfortunately, it's not always easy to know when you have one of these ideals, as the more pervasive they are, the less visible they become. And, when confronted about one, the natural response is to shy away from the subject -- after all, the ideal exists precisely so we can avoid its opposite.
This mechanism is also the root of hypocrisy - talking about an ideal frees us from having to do anything we don't actually want to, because it's really only about avoiding the opposite. Any time we don't want to do something, we can always rate it as a poor way of fulfilling the ideal, even if the act would improve things with respect to that ideal.
Pledging these things is likely a signaling of the form, "don't punish me for nonconformance, I am conforming and promoting conformance to tribal standards".
That may well just be the evolutionary origin of the signal. I'm no ev-psych expert, but I'd be surprised if all or most signaling behavior involved fear somewhere in the brain. It seems entirely plausible to just produce a brain that wants to conform and promote conformance, given enough time to adapt.
In any case, whether the tribe explicitly makes the ideal a goal, or if you just create it personally because of a bad experience, the same machinery and behaviors end up on the case.
For non-adapted ideals (meaning the desire isn't built-in), agreed.
Unfortunately, it's not always easy to know when you have one of these ideals, as the more pervasive they are, the less visible they become.
Completely agree. About the only cue I have for noticing them is picking up on reflexive emotional reactions that seem disproportionate to their cause, but these only tell me that a background operator is acting, not necessarily much about the nature of that operator. Do you know of any others?
I'd be surprised if all or most signaling behavior involved fear somewhere in the brain.
I said it was probably for avoiding punishment; some conformance behavior is approach- rather than avoidance-driven. Ideals that you go after because you admire them, not because you'll be a bad person if you don't. Note that part of the evolutionary punishment mechanism is also punishing non-punishers... we don't generally see people being zealously evangelistic about truly positive ideals, only ones where there's punishment involved. So we tend to see most of the problems with idealism when there's an aversive component.
About the only cue I have for noticing them is picking up on reflexive emotional reactions that seem disproportionate to their cause, but these only tell me that a background operator is acting, not necessarily much about the nature of that operator. Do you know of any others?
A few off the top of my head:
The "push" test - ask what happens if you don't get the result you want. Does it make you feel bad?
The "should" test - do you find yourself angry at others or the world because things should be different?
The criticism test - are you criticizing yourself or others for not living up to some standard?
The "yes but" test - have you arrived at some conclusion that seems reasonable to you, but you respond to the idea of implementing it with "yes, but..."?
The "afraid I'm" test - how would you complete the sentence, "I'm afraid I might be...", with an emotionally-negative label?
(And yes, by the above tests, some of my not-too-long-ago comments on LW would qualify me for harboring such an ideal... which is why I took a little time off and then dropped certain subjects I was "shoulding" on, once I noticed what was happening.)
What subjects were those, praytell?
Right, birth control is basically a conflict between genetic and human interests. Short-leash genetic control is situationally triggered and can be difficult for our conscious mind to admit / predict ahead of time. We think we will have self-control, but when the time comes, the short-leash modules trigger and convince us to have unprotected sex (to advance genetic interests). Making it as easy as possible to not give in to short-leash temptation is important for resisting it, which is why keeping condoms around is better than abstinence, and an IUD or injectable birth control is better yet.
Admittedly there is some rationality to the idea of not just ameliorating temptation but avoiding it. Also I suspect abstinencers have a different utility function than I do - they view sex as bad in and of itself and not just because of consequences like unwanted pregnancies. Their method makes more sense given that viewpoint. If it worked, at least.
We think we will have self-control, but when the time comes, the short-leash modules trigger and convince us to have unprotected sex (to advance genetic interests).
I'm not sure this makes sense as an evolutionary mechanism. Contraception hasn't been around long enough for it to be a selective pressure, has it?
My impression is that celibacy pledges do something (although abstinence education maybe doesn't), but I don't trust much of what I read, since it's so politicized. The article you link to says that pledging doesn't prevent sex before marriage, but that's a pretty high bar. The paper I was talking about (I added a citation above) says that it delays sex by 18 months; that just doesn't get you past marriage. Here is a paper that claims pledges do nothing, and that earlier results didn't control for enough.
Let's not worry about absolute effect and instead look for robust results. I think it's robust that seeking abstinence leads to less birth control and a higher ratio of STDs to pregnancies. Also, the point of my original claim, about embattled minorities, is robust to that kind of selection bias.
Patri, it sure seems to me that even funding the purchase of a used cruise-ship for medical purposes will require substantial investment, of the sort that will only be available as a result of mainstream status generated through mainstream news attention. Given that, how does such news attention not constitute progress towards your goals?
I don't associate "getting investment" and "getting mainstream news attention" so strongly. The Forbes video on SurgiCruise (http://video.forbes.com/fvn/breakout/cruising-for-surgery) may help mainstream status and fundraising, but I don't think that sort of thing is critical. And for residential, occupant-funded developments, we need committed people not general attention, which I think comes more from targeted and high-involvement marketing (like giving talks) as opposed to mass media which just raises general awareness.
Even if I'm wrong, that just means my feelings towards the press are right for the wrong reasons. I can feel a narcissistic attraction in publicity that is totally separate from my evaluation of its strategic impact.
I think you're grossly overestimating both the financial utility and desirability of celebrity.
Furthermore, it's far from obvious that news attention will lead to an increase in useful or positive fame.
This reminds me of the old author's adage: "Never tell anyone about the story until you're done writing it!" Because if you do, you lose motivation to write it?
I suspect you could adapt this technique for other endeavours. Instead of telling the media about the turbine motor which powers the Tesla coils and Gatling guns that you're working on, just tell them you have something really big in the mix - maintain the air of secrecy until you actually accomplish it. That way you'll have internal and external motivations in alignment.
Heck, even explain why you're not telling them everything - that'll give you deep wisdom.
Helping to rescue marine mammals is a more effective way for a straight guy to signal high status to prospective sex partners than addressing existential risks is. I always considered that a feature, not a bug, because I always thought that people doing something to signal status do not do as good a job as people motivated by altruism, a desire to serve something greater than oneself or a sense of duty -- or even people motivated by a salary.
Why are we assuming these categories are mutually exclusive? Like Will points out, if we just accept that altruism and status-seeking are inextricable then we can design societies where altruistic behavior has high status returns. I guess I don't get the usefulness of the distinction.
Status seekers probably greatly outnumber true altruists.
But you should tend to keep the status seekers out of positions of great responsibility IMHO even if doing so greatly reduces the total number of volunteers working on existential risks.
My tentative belief that status seekers will not do as good a job BTW stems from (1) first-hand observation and second-hand observation of long-term personal performance as a function of personal motivation in domains such as science-learning, programming, management and politics and (2) a result from social psychology that intrinsic reinforcers provide more reliable motivation than extrinsic reinforcers (for more about which, google "Punished by Rewards").
The last thing the future light cone needs is for existential-risk activism to become the next big thing in how to show prospective friends and prospective lovers how cool you are.
A lot more people talk about existential risks, often in a very animated way, than do anything about them.
I think probably the vast majority of people interested in existential risk want to signify both that they are good caring people, and that they are hard-headed intelligent rationalists and not the sort of muddled peace-and-love types who would go around waving "FUR IS MURDER" signs.
Probably doesn't actually work as far as getting friends and lovers is concerned, but it's a good self-signal.
Someone should document and categorize the most common signaling tropes of this community. Maybe once I get up to 40 or whatever.
Maybe altruists do a better job at the task at hand, but they do a much worse job at survival and reproduction, hence get ruthlessly selected against by evolution, hence are much rarer, hence if your cause only appeals to altruists, it will have far fewer supporters than if it appeals to status signalers too. So if your cause needs many supporters, you just have to suck it up and try to provide status.
This is a downside to LW being a sausage-fest - less incentive for men to status-seek through community-valued accomplishments if it won't get them chicks.
Actually, you can generalize that: since most internet communities are not geographically localized in meatspace (which is to say sexspace), it makes sense that such a community which happens to be full of men would not try too hard to attract female members (or impress existing ones), since if you live thousands of miles away from a woman, you're unlikely to have sex with her unless she's really, extraordinarily impressed.
But perhaps if LW focuses more on in-person meetups that could change.
I suspect that meetup.com's continued existence is fueled by single people trying to actually meet interesting others.
I was part of a meetup on "alternative energy" (to see if actual engineers went to the things -- I didn't want to date a solar cell) when I got an all-group email from the group founder about an "event" concerning The Secret and a great opportunity to make money. Turned out it was a "green" multi-level marketing scam he was deep in, and they were combining it with The Secret. Being naive, at first I said I didn't think the event was appropriate, assuming it might lead to some discussion. He immediately slandered me to the group, but I managed to send out an email detailing his connections to the scam before I was banned from the group. I did get a thank you from one of the members, at least.
I looked through meetup and found many others connected to him. Their basic routine involves paying the meetup group startup cost, having a few semi-legit meetings, and then using their meetup group as a captive audience.
I admit, I was surprised. I know it's not big news, but the new social web has plenty of new social scammers, and they're running interference. It's hard to get a strong, clear message out when opportunists know how to capitalize on easy money: people wanting to feel and signal like they're doing something. I honestly don't think seasteading can even touch that audience, but then again, I'm not sure you'd want to.
[edit: changes to second paragraph]
I think that having very high standards is one of the major requirements for achieving anything substantial or great.
It'd help with the issue you describe: if you've got very high standards, it's about what you think is important, and whether you think you've done something worthwhile or not - you don't care so much for pats on the back or attention from others.
As Steve Martin says, "Be so good they can’t ignore you."
The traditional wisdom says that publicly committing to a goal is a useful technique for accomplishment. It creates pressure to fulfill one's claims, lest one lose status. However, when the goal is related to one's identity, a recent study shows that public commitment may actually be counterproductive. Nyuanshin posts:
This matches my experience over the first year of The Seasteading Institute. We've received tons of press, and I've probably spent as much time at this point interacting with the media as working on engineering. And the press is definitely useful - it helps us reach and get credibility with major donors, and it helps us grow our community of interested seasteaders (it takes a lot of people to found a country, and it takes a mega-lot of somewhat interested people to have a committed subset who will actually go do it).
Yet I've always been vaguely uncomfortable about how much media attention we've gotten, even though we've just started progressing towards our long-term goals. It feels like an unearned reward. But is that bad? I keep wondering "Why should that bother me? Isn't it a good thing to be given extra help in accomplishing this huge and difficult goal? Aren't unearned rewards the best kind of rewards?" This study suggests the answer.
My original goal was to actually succeed at starting new countries, but as a human, I am motivated by the status to be won in pursuit of this goal as well as the base goal itself. I recognize this, and have tried to use it to my advantage, visualizing the joys of having achieved high status to motivate the long hours of effort needed to reach the base goal. But getting press attention just for starting work on the base goal, rather than significant accomplishments towards it, short-circuits this motivational process. It gives me status in return for just having an interesting idea (the easy part, at least for me) rather than moving it towards reality (the hard part), and helps affirm the self-image I strive for in return for creating the identity, rather than living up to it.
I am tempted to say "Well, since PR helps my goal, I shouldn't worry about being given status/identity too easily, it may be bad for my motivation but it is good for the cause", but that sounds an awful lot like my internal status craver rationalizing why I should stop worrying about getting on TV (Discovery Channel, Monday June 8th, 10PM EST/PST :) ).
My current technique is to try, inasmuch as I can, to structure my own reward function around the more difficult and important goals. To cognitively reframe "I got media attention, I am affirming my identity and achieving my goals" as "I got media attention, which is fun and slightly useful, but not currently on the critical path." To focus on achievement rather than recognition (internal standards rather than external ones, which has other benefits as well). Not only in my thoughts, but also in public statements - to describe seasteading as "we're different because we're going to actually do it", so that actual accomplishment is part of the identity I am striving for.
One could suggest that OB/LW has this problem too - perhaps rewarding Eliezer with status for writing interesting posts allows him to achieve his identity as a rationalist with work that is less useful to his long-term goals than actually achieving FAI. However, I don't buy this. I think raising the sanity waterline is a big deal, greater than FAI because it increases the resources available for dealing with FAI-like problems (i.e. converting a single present or future centimillionaire could lead to hiring multiple Eliezers' worth of AI researchers). Hence his public-facing work has direct positive impact. And given this, while Eli's large audience may selfishly incent him towards public-facing work via the desire to seek status, it also increases the actual impact of his public-facing work since he reaches many people.
Also of relevance is the community in which one is achieving status. Eliezer's OB/LW audience is largely self-selected rationalists, which might be good because it's the most receptive audience, or it might be restricting his message to an unnecessarily small niche, I'm not sure. But for seasteading, I think there is a clear conflict between the most exciting and most useful audiences. What we need to succeed is a small group of highly committed and talented people, which is better served by very focused publicity, yet intuitively it feels like more of a status accomplishment to reach a broader audience (y'know, one with lots of hot babes - that's why guys seek status, after all). (This is a downside to LW being a sausage-fest - less incentive for men to status-seek through community-valued accomplishments if it won't get them chicks.)
This issue reminds me of our political system, which rewards people for believably promising to achieve great things rather than for accomplishing them. After all, which gets a Congressman more status in our society - the title of "Senator", or their voting record and the impact of the bills they helped craft and pass? Talk about image over impact!
Anyway, your thoughts on motivation, identity, public commitment, and publicity are welcomed.