Imitative Reinforcement Learning as an AGI Approach
I've been thinking that reinforcement-learning-driven imitation between agents may help explain human intelligence, and that it's worth exploring further as an approach to AGI. With most objective functions, like "acquire food", it's difficult to get agents to exhibit the complex behaviors humans do. But rewarding agents for imitating each...
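To make the idea concrete, here is a minimal sketch of an imitation-only reward signal. Everything here is a toy assumption for illustration: a fixed "demonstrator" policy over a handful of states, and a tabular learner that never receives a task reward, only a reward for matching the demonstrator's action.

```python
import random

random.seed(0)

# Toy setup (all names and sizes are hypothetical): a demonstrator acts
# with a fixed policy, and a learner is rewarded purely for imitating it.
# No task reward like "acquire food" is ever given.

N_STATES = 5
ACTIONS = [0, 1, 2]

def demonstrator_policy(state):
    # Fixed behavior the learner cannot observe directly; it only
    # receives a scalar reward for matching it.
    return (state * 2) % len(ACTIONS)

def imitation_reward(state, action):
    # Reward is 1 when the learner's action matches the demonstrator's.
    return 1.0 if action == demonstrator_policy(state) else 0.0

# Tabular action values for the learner: Q[state][action]
Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
ALPHA, EPSILON = 0.1, 0.2

for episode in range(2000):
    state = random.randrange(N_STATES)
    # Epsilon-greedy action selection.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[state][a])
    r = imitation_reward(state, action)
    # One-step (bandit-style) update; each state is treated independently.
    Q[state][action] += ALPHA * (r - Q[state][action])

# The learner's greedy policy ends up matching the demonstrator's,
# driven only by the imitation reward.
learned = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(learned)
print([demonstrator_policy(s) for s in range(N_STATES)])
```

The point of the sketch is that the reward function contains no task knowledge at all; complexity would have to come from whatever behavior the demonstrator itself exhibits, which is the appeal of imitation as a bootstrapping signal.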