Thanks for responding.
I agree with what you're saying; I think you'd want to maintain your reward stream at least partially. However, the main point I'm trying to make is that in this hypothetical, it seems like you'd no longer be able to think of your reward stream as grounding out your values. Instead it's the other way around: you're using your values to dictate the reward stream. This happens in real life sometimes, when we try to make things we value more rewarding.
You'd end up keeping your values, I think: your beliefs about what you value don't go away, the behaviors that put them into practice don't immediately go away either, and through those the values themselves are maintained (at least somewhat).
If you can still have values without reward signals that tell you about them, then doesn't that mean your values are defined by more than just what the "screen" shows? That even if you could see and understand every part of someone's reward system, you still wouldn't know everything about their values?
This conception of values raises some interesting questions for me.
Here's a thought experiment: imagine your brain loses all of its reward signals. You're in a depression-like state where you no longer feel disgust, excitement, or anything. However, you're given an advanced wireheading controller that lets you easily program rewards back into your brain. With some effort, you could approximately recreate your excitement when solving problems, disgust at the thought of eating bugs, and so on, or you could create brand-new responses. My questions:
My intuition says that life does not become pointless now that you're the author of your reward stream. This suggests that even if values are something like fictions, the reward signals aren't their one true source, in the same way that Harry Potter could live on even if all the books were lost.
While I don't have specifics either, my impression of ML research is that it's a lot of work to get a novel idea working, even if the idea is simple. If you're implementing your own idea, you'll bang your head against the wall for weeks or months wondering why your loss is worse than the baseline. If you're replicating a promising-sounding paper, you'll bang your head against the wall for the same reason. It's hard to tell whether you made a subtle error in your implementation or the idea simply doesn't work, for reasons you don't understand, because ML has little in the way of theoretical backing. Even when something works, it won't be optimized, so you need engineers to improve performance and keep training stable at scale. If you want to ship a working product quickly, it's best to choose what's tried and true.
At the start of my Ph.D. six months ago, I was generally wedded to writing "good code", the kind you learn in school and in standard software engineering these days: object-oriented, DRY, extensible, well-commented, and unit-tested.
I think you'd like Casey Muratori's advice. He's a software dev who argues that "clean code" as taught is actually bad, and that the way to write good code efficiently is closer to how you did it intuitively before you were taught OOP and the rest. He advises "Semantic Compression" instead: you straightforwardly write code that works, then pull out and reuse the parts that get repeated (see the sketch below).
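To make that concrete, here's a minimal toy sketch in Python (my own example, not Muratori's; the function names are made up for illustration). The first pass just writes the thing twice; the second pass notices the repetition in working code and pulls it out:

```python
# First pass: write the straightforward version, duplication and all.
def report_train_loss(losses):
    avg = sum(losses) / len(losses)
    print(f"train loss: mean={avg:.4f} min={min(losses):.4f} max={max(losses):.4f}")

def report_val_loss(losses):
    avg = sum(losses) / len(losses)
    print(f"val loss: mean={avg:.4f} min={min(losses):.4f} max={max(losses):.4f}")

# Second pass: the repetition is now visible, so "compress" it into
# a single reusable helper.
def report_loss(name, losses):
    avg = sum(losses) / len(losses)
    print(f"{name} loss: mean={avg:.4f} min={min(losses):.4f} max={max(losses):.4f}")

report_loss("train", [0.9, 0.7, 0.6])
report_loss("val", [1.1, 0.9, 0.85])
```

The point is the order of operations: the helper is discovered from repetition in code that already works, rather than designed up front.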
Yeah, I think the mainstream view of activism is something like "Activism is important, of course. See the Civil Rights and Suffrage movements. My favorite celebrity is an activist for saving the whales! I just don't like those mean crazy ones I see on the news."
Pacing is a common stimming behavior. Stimming is associated with autism / sensory processing disorder, but neurotypical people do it too.
This seems too strict to me: by that definition humans aren't generally intelligent, and a system doesn't count as AGI unless it's also a world-class underwater basket weaver. A system that cleared that bar would be weak ASI, not merely AGI.
Fatebook has worked nicely for me so far, and I think it'd be cool to use it more throughout the day. Some features I'd like to see:
I've been trying to put all my long-form reading material in one place myself, and found a brand-new service called Reader which is designed specifically for this purpose. It supports RSS, newsletters, YouTube transcripts, and other stuff. $10/month billed annually, or $13/month billed monthly.