At the start of my Ph.D. 6 months ago, I was generally wedded to writing "good code". The kind of "good code" you learn in school and in standard software engineering these days: object-oriented, DRY, extensible, well-commented, and unit-tested.
I think you'd like Casey Muratori's advice. He's a software dev who argues that "clean code" as taught is actually bad, and that the way to write good code efficiently is more like the way you did it intuitively before you were taught OOP and stuff. He advises "Semantic Compression" instead: essentially, you just straightforwardly write code that works, then pull out and reuse the parts that get repeated.
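A toy sketch of the idea (my own hypothetical Python example, not Muratori's code): write the obvious repetitive version first, then compress once the repetition is staring at you.

```python
# Pass 1: just write code that works, even if it repeats itself.
def report_scores(players):
    lines = []
    for p in players:
        lines.append(f"{p['name']}: {p['score']}")
    print("\n".join(lines))

def report_levels(players):
    lines = []
    for p in players:
        lines.append(f"{p['name']}: {p['level']}")
    print("\n".join(lines))

# Pass 2: the repeated shape is now obvious, so pull it out and reuse it.
def report(players, field):
    print("\n".join(f"{p['name']}: {p[field]}" for p in players))

report([{"name": "Ada", "score": 3, "level": 7}], "score")
```

The point is that the abstraction gets extracted from real, working code after the duplication shows up, rather than designed up front.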
Yeah, I think the mainstream view of activism is something like "Activism is important, of course. See the Civil Rights and Suffrage movements. My favorite celebrity is an activist for saving the whales! I just don't like those mean crazy ones I see on the news."
Pacing is a common stimming behavior. Stimming is associated with autism / sensory processing disorder, but neurotypical people do it too.
This seems too strict to me, because it says that humans aren't generally intelligent, and that a system isn't AGI if it's not a world-class underwater basket weaver. I'd call that weak ASI.
Fatebook has worked nicely for me so far, and I think it'd be cool to use it more throughout the day. Some features I'd like to see:
When I see an event with the stated purpose of opposing highly politically polarized things such as cancel culture and safe spaces, I imagine a bunch of people with shared politics repeating their beliefs to each other and snickering, and any beliefs that are actually highly controversial within that group are met with "No no, that's what they want you to think, you missed the point!" It seems possible to avoid that failure mode with a genuine truth-seeking culture, so I hope you succeeded.
It's been about 4 years. How do you feel about this now?
Bluesky has custom feeds that can bring in posts from any platform that uses the AT Protocol, but Bluesky is the only such platform right now. Most feeds I've found so far are simple keyword searches, which work nicely for building communities around particular topics, but I hope to see more sophisticated ones pop up.
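For concreteness, a toy sketch of what I mean by a keyword-search feed (purely illustrative; a real feed generator serves post URIs over the AT Protocol, and the keyword set here is made up):

```python
# Hypothetical keyword filter: the core logic behind many simple custom feeds.
KEYWORDS = {"forecasting", "prediction market"}

def matches(text: str) -> bool:
    lowered = text.lower()
    return any(kw in lowered for kw in KEYWORDS)

def build_feed(posts: list[dict]) -> list[dict]:
    # Keep only posts whose text mentions one of the keywords.
    return [p for p in posts if matches(p["text"])]

print(build_feed([
    {"text": "New prediction market on AI timelines"},
    {"text": "Photo of my cat"},
]))
```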
While I don't have specifics either, my impression of ML research is that it's a lot of work to get a novel idea working, even if the idea is simple. If you're trying to implement your own idea, you'll be banging your head against the wall for weeks or months wondering why your loss is worse than the baseline. If you try to replicate a promising-sounding paper, you'll bang your head against the wall as your loss comes out worse than the baseline. It's hard to tell whether you made a subtle error in your implementation or the idea simply doesn't work for reasons you don't understand, because ML has little in the way of theoretical backing. Even when it works, it won't be optimized, so you need engineers to improve performance and keep training stable at scale. If you want to ship a working product quickly, it's best to stick with what's tried and true.