How would a military that is increasingly run by AI factor into these scenarios? It seems most similar to organizational safety, à la Google building software with SWEs, but the disanalogy might be that the AI is explicitly supposed to take over some part of the world and may simply have interpreted a command incorrectly. Or does this article only consider the AI taking over because it wanted to take over?
Huh, did you experience any side effects?
I think discernment is not essential to entertainment. If people really want to learn what a slightly off piano sounds like and also pay for expert piano tuning, then that’s fine, but I don’t think people should be looked down upon for not having that level of discernment.
How would the agent represent non-coherent others? Like, humans don't have entirely coherent goals, and in cases where the agent learns it can only satisfy one goal or the other, how would it select which to pursue? Take a human attempting to lose weight, with goals both to eat to satisfaction and to not eat. Would the agent give the human food or withhold it?
One thing I find weird is that most of these objects of payment are correlated. The best-paying jobs also have the best peers, the most autonomy, and the most fun. Low-paid jobs were mostly drudgery along all axes, in my experience.
Thanks for the summary. Why should this be true?
The fact that sympathy for hedonic utilitarianism is strongly correlated with intelligence is a somewhat worrying datapoint in favor of the plausibility of squiggle-maximizers.
Embracing positive sensory experience because of humans' higher level of intelligence implies a linearity that I don't think holds among other animals. Are chimps more hedonic-utilitarian than ants, and ants more than bacteria? Human intelligence spans too narrow a range for this to be evidence of what something much smarter would do.
Thank you for writing this. My girlfriend and I would like kids, but I generally try not to bring AI up around her. She got very anxious while listening to an 80,000 Hours podcast on AI, and it seemed generally bad for her. I don't think any of my work will end up making an impact on AI, so I think the C.S. Lewis quote basically applies. Even if you know the game you're playing is likely to end, there isn't anything to do differently, since there are no valid moves once the new game actually starts.
I did want to ask, how did you think about putting your children in school? Did you send them to a public school?
What does impossible mean in the context of clock neurons?
"impossible in the first few moves."
What causes them to be unable to fire?
Based on the vibe of the post, it seems like you're trying to point at the concept of "being able to do many things". I guess generalization isn't 'for' anything; it's a concept. For an agent, generalization is a way of achieving an outcome from limited past experience, without wasting resources working out from scratch strategies that better generalization would have handed it for free. I can't really tell from what you said what I'm supposed to answer to "What does it offer you?". Like, generalization offers me the ability to recognize bad chess moves in new scenarios I haven't seen, or it offers me the ability to take over the universe based on limited knowledge of physics. I don't know where you're trying to draw the boundaries of the word.
I think you’re pretty severely mistaken about bullshit jobs. You said
But there are many counterexamples suggesting this isn't a real concept. See here for many of them: https://www.thediff.co/archive/bullshit-jobs-is-a-terrible-curiosity-killing-concept/