All of DeterminateJacobian's Comments + Replies

They don't call it systematized losing, now do they?

I'd suggest that one shouldn't begin productivity posts with statements about how productive the author is, especially if the aim is to help those who struggle with being productive, and who have perhaps failed in the past. Statements like "I was not always productive before" are on the right track, but I would instead lead off with that narrative and develop it more fully: describe what it actually felt like to be unproductive, and then give the good reasons you found to begin anew, which may offer hope to pe... (read more)

Or what your skills are. People who are poor at soliciting the cooperation of others might begin to classify all actions intended to change others' behavior as "blame," and thus doomed to fail, just because trying to change others' behavior doesn't usually succeed for them.

What could the woman in the harassment video do? Maybe she could start an entire organization dedicated to ending harassment, and then stay in NY as a way to signal she is refusing to let the harassers win. Or if the tradeoff isn't worth it to her personally, leave as Ada... (read more)

I like that article. For people capable of thinking about what methods make humans happy, it seems unlikely that simply performing any feel-good method will overcome barriers as difficult as the questions of what happiness means, or what use happiness is anyway. Such methods might improve one's outlook in the short term, or provide an easier platform from which to answer those questions, but to me the notion that therapy works because of therapists (a sort of research-supported idea, if I recall correctly) corresponds well to the intuition that humans are just too wrapped up for overly ... (read more)

I've sort of internalized the idea that everything is, at least in principle, a solvable problem. And more importantly, that this corresponds without conflict to the normal way that I go around being good and human when not operating under the rationalist guise.

I'd say rationalism often takes this in-principle role in my thinking, providing a meta-level solution to extremely hard problems that my brain is already trying to solve by non-rationalist means. To take an example from recent months of my life: I've had an extremely hard time reconciling my ... (read more)

Heh, clever. In a sense, iron is the endpoint of nuclear evolution: it has the highest binding energy per nucleon, so a cold, diffuse sea of iron is (atomically speaking) about the highest-entropy state ordinary matter can reach. So if you take the claim that an aspect of solving intergalactic optimization problems involves consuming as much negentropy as possible, and that the highest-entropy state of spacetime is low-density iron (see schminux's comment on black holes), then Clippy it is. It seems, though, like superintelligent anything-maximizers would end up finding even higher-entropy states that go beyond the merely atomic kind.

...Or even discover ways that suggest that availability ... (read more)

2DanielLC
There is a theoretical lower limit on the energy needed to erase a bit (Landauer's principle: kT ln 2 per bit), and it scales with temperature. Unless the expansion of the universe has a limit, the universe will get arbitrarily cold, and computers could be arbitrarily energy-efficient. In principle, you could make a finite amount of energy last for an infinite number of computations.
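To put a rough number on it, here's a minimal sketch of the kT ln 2 bound; the one-joule budget and the chosen temperatures are purely illustrative:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant in J/K

def max_bit_erasures(energy_joules, temperature_kelvin):
    """Landauer bound: erasing one bit costs at least k_B * T * ln(2) joules,
    so a fixed energy budget allows at most this many erasures at temperature T."""
    return energy_joules / (k_B * temperature_kelvin * math.log(2))

budget = 1.0  # one joule, purely illustrative
for T in (300.0, 2.7, 1e-10):  # room temperature, today's CMB, a far-future chill
    print(f"T = {T:g} K: at most {max_bit_erasures(budget, T):.2e} bit erasures")
```

As T falls toward zero the bound diverges, which is the sense in which a finite energy store could, in principle, fund an unbounded number of (irreversible) computations.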
3rule_and_line
The last question was asked for the first time, half in jest, on May 21, 2061 ...
3dougclow
Empirically, we seem to be converging on the idea that the expansion of the universe continues forever (see Wikipedia for a summary of the possibilities), but it's not a total slam dunk yet. If there is a Big Crunch, then that puts a hard limit on the time available. If - as we currently believe - that doesn't happen, then the universe will cool over time, until it gets too cold (i.e., too short on negentropy) to sustain any given process. A superintelligence would obviously see this coming, and have plenty of time to prepare - we're talking hundreds of trillions of years before star formation ceases. It might be able to switch to lower-power processes to continue in attenuated form, but eventually it will run out. This is, of course, assuming our view of physics is basically right and there aren't any exotic possibilities like punching a hole through to a new, younger universe.
3[anonymous]
See also: Special Threads. I share the concern about navigation and size, although LessWrong is far more legible than most long-running blogs, forums, and community projects of a similar size.

Yeah, leaving the industry is extremely common, but in my opinion not outcome-optimal for the employers who are driving their employees to extremes (or, more commonly, not encouraging normalcy). There are indeed young recruits who are willing to come in and spend huge amounts of time and effort on game projects, but there is huge variance in work quality in this group, such that the flipside of fast, positive work from a talented, unstable, young programmer is the risk of losing time on poor work from a weak, unstable, young programmer... with little way ... (read more)

In the videogame industry they have 80-hour work weeks all the time. They can get away with it because of employee turnover. If you burn out and quit, there's a line of people who'd be happy to take your place.

This does not match my experience. I've seen very few people actually work 80 hours a week more than once, even in a crunch cycle, and this matches an informal survey result showing a distribution that peaks around 44 hours per week during normal periods and 57 during crunch (page 21).

I also haven't experienced there being a line of people ready to take t... (read more)

4cousin_it
Yeah, my "80 hours" was an overstatement, though maybe you can replace it with 60 and the message will still stand. Indie game development isn't very profitable, and unemployed game developers can easily switch to normal software jobs instead. So for every one who starts their own indie thing, there are probably several more who leave the industry.

I was thinking about this question with regard to whether CDT agents might have a simply Bayesian reason to mimic UDT agents, not only in any pre-decision signaling but also in the actual decision. And I realized an important feature of these problems is that the game ends precisely when the agent submits a decision, which highlights the feature of UDT that distinguishes its cooperation from simple Bayesian reasoning: a distinction that becomes important when you start adding qualifiers that include unknowns about other agents' source codes. The game may... (read more)
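To make the distinction concrete, here is a toy program-swap Prisoner's Dilemma sketch; the payoff table and the source-code-equality test are standard illustrative assumptions, not anything specified in the original setup:

```python
# Toy one-shot Prisoner's Dilemma where each agent sees the other's source.
# Payoffs follow the standard ordering: temptation > reward > punishment > sucker.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def udt_like(own_source, opponent_source):
    # Cooperate exactly when the opponent runs the same decision procedure,
    # so "my decision" and "their decision" are logically linked.
    return "C" if opponent_source == own_source else "D"

def cdt_like(own_source, opponent_source):
    # The opponent's output is treated as a fixed fact at decision time,
    # so defection causally dominates regardless of what that output is.
    return "D"

UDT_SRC, CDT_SRC = "udt_like", "cdt_like"

# Two UDT-like agents recognise each other and both cooperate: (3, 3).
print(PAYOFFS[(udt_like(UDT_SRC, UDT_SRC), udt_like(UDT_SRC, UDT_SRC))])

# A CDT-like agent facing a UDT-like agent ends up in mutual defection: (1, 1).
print(PAYOFFS[(cdt_like(CDT_SRC, UDT_SRC), udt_like(UDT_SRC, CDT_SRC))])
```

The Bayesian question above is whether the CDT agent could profit by actually running the UDT procedure rather than merely signaling it; in this toy setup, mimicry only pays if it goes all the way down to the source code the other agent inspects.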

I have taken the survey, and to signal my cooperation I have upvoted every existing top-level comment here. Do unto others...