Most existing large-scale models don't seem to have access to their own outputs (except in the sense that some of those outputs show up in the training inputs of the next generation of the model). Does this prevent recursive self-improvement, or at least limit its speed to the generation time between models?

Which existing models do have access to their outputs? Conversational models like GPT-N do, but their output goes through a human conversational partner and is thus limited to the speed of that partner, except in cases where two models have been hooked up to each other. Most cognitive architectures that try to model human thought have such access, but simple input-output models, e.g. for image generation, don't.
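To make the contrast concrete, here is a minimal sketch of the two kinds of loops described above. The `generate` function is a hypothetical stand-in for any conversational model's sampling step, not a real API; the point is only that in the human-mediated loop the bottleneck is the human partner, while in the model-to-model loop the outputs feed back at whatever rate the models can generate.

```python
import time


def generate(model_name: str, prompt: str) -> str:
    """Placeholder for a model call; returns a canned reply for illustration."""
    return f"[{model_name} reply to: {prompt[:40]}]"


def human_mediated_loop(turns: int) -> None:
    """Model output only re-enters as input after a human reads and responds."""
    prompt = "Hello"
    for _ in range(turns):
        reply = generate("model", prompt)
        time.sleep(1.0)  # stands in for human reading/typing time, the bottleneck
        prompt = f"Human response to: {reply}"


def model_to_model_loop(turns: int) -> None:
    """Two models hooked up to each other; each output becomes the other's input."""
    message = "Hello"
    for _ in range(turns):
        message = generate("model_A", message)
        message = generate("model_B", message)  # no human in the loop, so no pacing


if __name__ == "__main__":
    human_mediated_loop(turns=3)
    model_to_model_loop(turns=3)
```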
