I'd like to suggest that one shouldn't begin productivity posts with statements about how productive the author is, especially if the aim is to help people who are struggling with productivity and have perhaps failed at it in the past. Statements like "I was not always productive" are on the right track, but I would instead lead off with that narrative and develop it more fully: describe what it actually felt like to be unproductive, and then give the good reasons you found to begin anew, which may offer hope to people still struggling. The last thing you want is for readers to feel that they are in a different reference class than you, and to start believing that your suggestions can't apply no matter how much effort they put in.
I get that this post is meant as an introduction for beginners who may be learning these concepts for the first time, and that there's a difference between learning productivity and combating akrasia. I'd just like to suggest that the inferential gap across the web is much bigger than with the friends you've tried this on in person, so it may help to write more narratively than prescriptively.
Or what your skills are. People who are poor at soliciting the cooperation of others might begin to classify every action intended to change others' behavior as "blame," and therefore doomed to fail, just because trying to change others' behavior doesn't usually succeed for them.
What could the woman in the harassment video do? Maybe she could start an entire organization dedicated to ending harassment, and then stay in NY as a way of signaling that she refuses to let the harassers win. Or, if the tradeoff isn't worth it to her personally, leave, as Adams suggests. She isn't making it Scott Adams's problem; she's making it the problem of anybody who actually wants it to be their problem too. That's how cooperation works, and people can be good or bad at encouraging cooperation in completely measurable ways. Assigning irremediable blame and refusing to encourage change at all are both losing solutions.
I like that article. For people capable of thinking about what methods make humans happy, it seems unlikely that simply performing any feel-good method will overcome barriers as difficult as what happiness means or what use happiness is anyway. Such methods might improve one's outlook in the short term, or provide an easier platform from which to answer those questions, but to me the notion that therapy works because of therapists (a somewhat research-supported idea, if I recall correctly) corresponds well to the intuition that humans are just too wrapped up for overly easy feel-good solutions to work. (This is as opposed to psychiatric solutions to psychiatric issues, for which you should be following this algorithm if you're depressed.)
I've had trouble with the notion that happiness is even a goal to strive for at all, because of the self-referential reality that a really good way to become happy is to become less self-focused, while thinking about being happy is itself sort of self-focused. In that sense, I'd much rather seek out "fulfillment" or "goodness" than "happiness," but I now think my issue here is just an artifact of how people use the word "happy." That word is too wrapped up in ideas that make it out to be something like wireheading, which as we know is something that nobody actually wants. So while I do think people looking for X often stop short with not-very-desirable things, it's good to separate this from people who actually want to be the most good kind of happy, the kind that one would always want, and maybe still even call "happy."
I've sort of internalized the idea that everything is, at least in principle, a solvable problem. And more importantly, that this idea coexists without conflict with the normal way I go about being good and human when I'm not operating under the rationalist guise.
I'd say rationalism often plays this in-principle role in my thinking, providing a meta-level solution to extremely hard problems that my brain is already trying to solve by non-rationalist means. To take recent months of my life as an example: I've had an extremely hard time reconciling my knowledge that I'm the one to blame for all of my problems with the idea that I shouldn't feel guilty for not being perfect at solving all of my problems. This is a very human question that's filled to the brim with mental pitfalls, but I've been able to make a little progress by recognizing that, by definition and by method, instrumental rationality is equivalent to making myself good and awesome, whether or not the on-paper rationalist method is the one my brain is using most of the time. I'm better able to recognize that the human inability to be theoretically optimal is subsumed by human rationality, that the only optimal that exists is the kind that is actual, and that all that's left to do is take the infinite stream of possible self-improvements you can think of and start checking them off the list.
And so, when faced with something that seems next to impossible to solve (e.g. finding somebody to love), there's no reason to blame the world, myself, or my proclivity to blame the world or myself. There's only the chance to do the most possible fun thing, which is to enjoy the journey of being myself, where myself is defined as someone who ceaselessly self-improves, even when that means putting less pressure on myself to improve on the object level.
For a while the "weirdness" of Less Wrong made me want to shy away from really engaging with the people here, but I'd love for that to change. If everything is a solvable problem, and we only want to solve things that are problems, then either Less Wrong is just fine (and I can improve my perception), or it is sort of actually weird but can be improved. And I wouldn't mind contributing wherever this is possible.
Heh, clever. In a sense, iron has the highest entropy (atomically speaking) of any element. So if you take the claim that one aspect of solving intergalactic optimization problems is consuming as much negentropy as possible, and that the highest-entropy state of spacetime is low-density iron (see schminux's comment on black holes), then Clippy it is. It seems, though, like superintelligent anything-maximizers would end up finding even higher-entropy states that go beyond the merely atomic kind.
...Or even discover ways that suggest the availability of negentropy is not an actual limit on the ability to do things. Does anyone know the state of that argument? Is it known to be true that the universe necessarily runs out of things for superintelligences to do because of thermodynamics?
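For anyone who wants rough numbers behind the "iron as endpoint" intuition above: the usual statement is about binding energy per nucleon rather than entropy directly (standard nuclear-data values, quoted from memory as an illustration):

$$\frac{E_b}{A}\Big|_{^{4}\mathrm{He}} \approx 7.07\ \mathrm{MeV}, \qquad \frac{E_b}{A}\Big|_{^{12}\mathrm{C}} \approx 7.68\ \mathrm{MeV}, \qquad \frac{E_b}{A}\Big|_{^{56}\mathrm{Fe}} \approx 8.79\ \mathrm{MeV}$$

and the curve falls again for heavier nuclei (roughly $7.6\ \mathrm{MeV}$ for $^{238}\mathrm{U}$), so both fusing light elements and fissioning heavy ones release energy only while moving toward iron. That released energy gets radiated away as photons, which is where the entropy increase actually lives; iron is just the nucleus from which no further nuclear energy can be extracted.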
Yeah, leaving the industry is extremely common, but in my opinion it's not outcome-optimal for the employers who are driving their employees to extremes (or, more commonly, not encouraging normalcy). There are indeed young recruits willing to come in and spend huge amounts of time and effort on game projects, but there is huge variance in work quality within this group, such that the flip side of fast, positive work from a talented, unstable, young programmer is the risk of losing time on poor work from a weak, unstable, young programmer... with little way to know the difference in advance. A senior engineer at 40 hours is substantially better than a junior engineer at 80 hours, so companies probably should invest more in keeping talent for future projects, which is of course hard to do when crunch rolls around on the current project.
I used to hear this saying around the industry: "When you're on the outside of games, it seems like nobody is looking to hire good talent. When you're on the inside, it seems like there's nobody talented applying."
In the videogame industry they have 80 hour work weeks all the time. They can get away with it because of employee turnover. If you burn out and quit, there's a line of people who'd be happy to take your place.
This does not match my experience. I've seen very few people actually work 80 hours a week more than once, even in a crunch cycle, and this matches an informal survey which shows a distribution peaking around 44 hours per week for normal work and 57 during crunch (page 21).
I also haven't experienced there being a line of people ready to take the place of turned-over employees. Maybe there are many who would be happy to, but not nearly as many are talented enough to. The game industry has a market for employee talent just like any other, and if it were ever the case that there was mass unemployment of qualified game developers, you would see those developers starting companies (as we are actually seeing recently).
I was surprised to see that, according to the same study above, the median tenure in the game industry is only 2 years, though you may have to take that with the caveat that the study doesn't say how contractors and starving indies were meant to answer the question. The median tenure for the entire U.S. economy is 4.4 years, but possibly the better comparison is 25-34 year-olds, for whom the median is 3.0 years.
I was thinking about this question with regard to whether CDT agents might have a purely Bayesian reason to mimic UDT agents, not only in any pre-decision signaling but also in the actual decision. And I realized an important feature of these problems is that the game ends precisely when the agent submits a decision, which highlights the feature of UDT that distinguishes its cooperation from plain Bayesian reasoning: a distinction that becomes important when you start adding qualifiers that include unknowns about other agents' source code. The game may have any number of confounders and additional decision steps before the final step, but it is UDT specifically that allows cooperation on that final step.
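Here's a minimal toy sketch of what I mean (hypothetical names and payoff numbers, just the standard one-shot Prisoner's Dilemma against a known copy): a CDT-style agent can produce all the cooperative signaling it likes beforehand, but at the final node it holds the opponent's move fixed and defects, while a UDT-style agent that knows the opponent runs its same source code picks the cooperate policy.

```python
# Toy one-shot Prisoner's Dilemma against an exact copy of yourself.
# Hypothetical illustration only: the function names and payoffs are
# made up, but the structure is the standard PD.

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def cdt_final_move(signals_sent):
    # CDT treats the opponent's move as fixed at the moment of decision,
    # so defection dominates no matter how cooperatively it signaled.
    return "D"

def udt_final_move(opponent_runs_my_source):
    # UDT chooses a policy: if the opponent runs the same source code,
    # picking "cooperate" logically fixes both moves to C, and the
    # (C, C) payoff beats the (D, D) payoff, so it cooperates.
    if opponent_runs_my_source and PAYOFFS[("C", "C")] > PAYOFFS[("D", "D")]:
        return "C"
    return "D"

if __name__ == "__main__":
    signals = ["I fully intend to cooperate"] * 3  # pre-decision cheap talk
    print("CDT final move:", cdt_final_move(signals))                       # D
    print("UDT final move:", udt_final_move(opponent_runs_my_source=True))  # C
```

The pre-decision steps can be elaborated however you like; the point is just that the game ends at the last submitted move, and that is exactly the step where CDT's imitation of UDT breaks down.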
They don't call it systematized losing, now do they?