I'd suggest that one shouldn't begin productivity posts with statements about how the author is just so productive, especially if the aim is to help those who are struggling to be productive and who have perhaps failed in the past. Statements like "I was not always productive before" are on the right track, but I would instead lead off with that narrative and develop it more fully: describe what it actually felt like to be unproductive, and then give the good reasons you found to begin anew, which may offer hope to pe...
Or what your skills are. People who are poor at soliciting the cooperation of others might begin to classify all actions intended to change others' behavior as "blame," and thus doomed to fail, just because trying to change others' behavior doesn't usually succeed for them.
What could the woman in the harassment video do? Maybe she could start an entire organization dedicated to ending harassment, and then stay in NY as a way to signal she is refusing to let the harassers win. Or if the tradeoff isn't worth it to her personally, leave as Ada...
I like that article. For people capable of thinking about what methods make humans happy, it seems unlikely that simply performing some feel-good method will overcome barriers as difficult as the questions of what happiness means, or of what use happiness is anyway. Such methods might improve one's outlook in the short term, or provide an easier platform from which to answer those questions, but to me the notion that therapy works because of therapists (an idea with some research support, if I recall correctly) corresponds well to the intuition that humans are just too wrapped up for overly ...
I've sort of internalized the idea that everything is, at least in principle, a solvable problem. And, more importantly, that this corresponds without conflict to the normal way I go about being good and human when not operating under the rationalist guise.
I'd say rationalism often takes this in-principle role in my thinking, providing a meta-level solution to extremely hard problems that my brain is already trying to solve by non-rationalist means. To take an example from recent months of my life, I've had an extremely hard time reconciling my ...
Heh, clever. In a sense, iron has the highest entropy (atomically speaking) of any element. So if you take the claim that one aspect of solving intergalactic optimization problems is consuming as much negentropy as possible, and that the highest-entropy state of spacetime is low-density iron (see schminux's comment on black holes), then Clippy it is. It seems, though, that superintelligent anything-maximizers would end up finding even higher-entropy states that go beyond the merely atomic kind.
...Or even discover ways that suggest that availability ...
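For grounding, the textbook version of the iron fact is about nuclear binding energy rather than entropy per se: binding energy per nucleon peaks in the iron group (strictly at nickel-62, with iron-56 a very close second), so pushing matter toward iron via fusion or fission extracts the most nuclear energy, and radiating that energy away is what actually raises entropy. Roughly:

$$\left.\frac{B}{A}\right|_{^{62}\mathrm{Ni}} \approx 8.795\,\mathrm{MeV} \;>\; \left.\frac{B}{A}\right|_{^{56}\mathrm{Fe}} \approx 8.790\,\mathrm{MeV} \;>\; \left.\frac{B}{A}\right|_{^{4}\mathrm{He}} \approx 7.07\,\mathrm{MeV}$$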
Yeah, leaving the industry is extremely common, but in my opinion not outcome-optimal for the employers who are driving their employees to extremes (or, more commonly, not encouraging normalcy). There are indeed young recruits willing to come in and spend huge amounts of time and effort on game projects, but work quality in this group varies enormously, such that the flip side of fast, productive work from a talented, unstable, young programmer is the risk of losing time to poor work from a weak, unstable, young programmer... with little way ...
In the video game industry they have 80-hour work weeks all the time. They can get away with it because of employee turnover. If you burn out and quit, there's a line of people who'd be happy to take your place.
This does not match my experience. I've seen very few people actually work 80 hours a week more than once, even in a crunch cycle, and this matches an informal survey whose results show a distribution peaking around 44 hours for normal work weeks and 57 during crunch (page 21).
I also haven't experienced there being a line of people ready to take t...
I was thinking about this question with regard to whether CDT agents might have a simple Bayesian reason to mimic UDT agents, not only in any pre-decision signaling but also in the actual decision. And I realized an important feature of these problems is that the game ends precisely when the agent submits a decision, which highlights the feature of UDT that distinguishes its cooperation from simple Bayesian reasoning: a distinction that becomes important when you start adding qualifiers that include unknowns about other agents' source codes. The game may...
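A minimal sketch of the source-code point (Python, with hypothetical agent names; not anyone's canonical formalization): a CDT-style agent's output can't depend on its opponent's decision procedure, while a UDT-flavored agent that inspects source code can make cooperation conditional on facing a copy of itself.

```python
import inspect

# Payoff table for a one-shot Prisoner's Dilemma: (my move, their move) -> my payoff.
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def cdt_agent(opponent_source: str) -> str:
    # CDT-style reasoning: my move cannot causally influence the opponent's
    # already-fixed decision procedure, so defection dominates.
    return "D"

def mirror_agent(opponent_source: str) -> str:
    # UDT-flavored reasoning via source inspection: cooperate exactly when the
    # opponent runs this same procedure, so two copies both compute "C".
    return "C" if opponent_source == inspect.getsource(mirror_agent) else "D"

def play(agent_a, agent_b):
    # Each agent is shown the other's source code before moving; the game ends
    # the moment both decisions are submitted.
    move_a = agent_a(inspect.getsource(agent_b))
    move_b = agent_b(inspect.getsource(agent_a))
    return PAYOFFS[(move_a, move_b)], PAYOFFS[(move_b, move_a)]

print(play(mirror_agent, mirror_agent))  # (3, 3): mutual cooperation
print(play(cdt_agent, mirror_agent))     # (1, 1): mirror detects a non-copy and defects
```

Once the opponent's source is an unknown rather than a given, string equality no longer suffices and the mirror agent has to reason about equivalence of programs, which is where the hard part of the problem lives.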
They don't call it systematized losing, now do they?