Hi Martin, thanks a lot for reading and for your comment! I think what I was trying to express is actually quite similar to what you write here.
'If we did they would still have different experiences, notably the experience of having a brain architecture ill-suited to operating their body.' - I agree. If I understand shard theory right, it claims that underlying brain architecture doesn't make much difference, and e.g. the experience of trying to walk in different ways, and failing at some but succeeding at others, would be enough to lead to success. ...
Thanks, I really appreciate that! I've just finished an undergrad in cognitive science, so I'm glad that I didn't make any egregious mistakes, at least.
"AGI won't be just an RL system ... It will need to have explicit goals": I agree that this is very likely. In fact, the theory of 'instrumental convergence' often discussed here is an example of how an RL system could go from being comprised of low-level shards to having higher-level goals (such as power-seeking) that exert top-down influence. I think Shard Theory is correct about how very basic RL systems ...
"Something about being watched makes us more responsible ... In a pinch, placebo-ing yourself with a huge fake pair of eyes might also help."
There are 'Study with me'/'Work with me' videos on YouTube, which are usually just a few hours of someone working silently at a desk or in a library. I sometimes turn one of those on to give me the feeling that I'm not alone in the room, which raises accountability.
Great post!
I don't think people focus on language and vision because they're less boring than things like decision trees; they focus on them because the domains of language and vision are much broader than the domains to which decision trees and similar models are applied. If you train a decision tree model to predict the price of a house, it will do just that, whereas if you train a language model to write poetry, it could conceivably write about various topics such as math, politics, and even itself (since poetry has a broad scope). This is (possibly) a step towards general intellige...
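To make the contrast concrete, here's a minimal sketch (assuming scikit-learn is available; the features and prices are made up for illustration). A decision tree trained on house data can only ever map house features to prices drawn from what it saw, which is the narrowness being pointed at:

```python
from sklearn.tree import DecisionTreeRegressor

# Hypothetical toy data: [square_meters, num_bedrooms] -> price
X = [[50, 1], [80, 2], [120, 3], [200, 4]]
y = [100_000, 160_000, 250_000, 400_000]

model = DecisionTreeRegressor(random_state=0).fit(X, y)

# The model's entire "domain": for any input, it returns a price
# from one of the leaves it learned. It cannot do anything else.
price = model.predict([[90, 2]])[0]
print(price)
```

A fully grown tree on this toy set just routes a new house to one of the four training prices, whereas a language model's outputs are not confined to a fixed target variable in this way.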
It's great to see Brittany's response was so positive, but could you clarify whether you explicitly told her you would help her learn how to cook, or whether she asked you to do so? Or did you just infer that it's something she would enjoy, and proceed without making it explicit?
Again, I'm happy for Tiffany's newfound cooking abilities - congratulations to her!
Thanks for the response, Martin. I'd like to try to get to the heart of what we disagree on. Do you agree that someone with a sufficiently different architecture - e.g. a human who somehow had a dog's brain implanted - would grow to have different values in some respects? For example, you mention arguing persuasively. Argument is a pretty specific ability, but we can widen our field to language in general - the human brain has pretty specific circuitry for that. A dog's brain that lacks the appropriate language centers would likely never learn to speak, let alo...