ChatGPT answered with the following text. I agree with some of it but not all; for instance, contrary to its claim of ongoing research and debate, I haven't actually seen any discussions on this topic yet.
There is no definitive answer to this question as it is an area of ongoing research and debate. However, some experts believe that the ability of humans to shift attention and forget about the previous thing may give them an advantage over LLMs when it comes to certain types of reasoning tasks. This is because humans are able to use their past experiences and knowledge to frame their understanding of the present, whereas LLMs may be more limited in their ability to use context to understand new information.
On the other hand, LLMs may be better at certain tasks because of their ability to create a more detailed simulacrum with some object permanence. For example, they may be better at remembering precise details or making connections between pieces of information that are further apart in time or space.
Overall, it's hard to compare the computational and reasoning abilities of LLMs and humans in such a general sense as it will depend on the specific task or context being considered.
Humans are constantly simulating the things around them; however, they can rather easily shift attention and forget about the previous thing. So we can say that the human simulacrum does not have object permanence.
On the other hand, AI language models prompted to write down their thoughts and reasoning cannot get rid of things they no longer need: those words remain visible until they are mechanically shifted out of the context window. So the simulated objects have a degree of permanence.
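To make the contrast concrete, here is a minimal sketch in Python of this shift-out behavior. The `ContextWindow` class and the token counts are hypothetical illustrations, not any real LLM API; the point is only that a fixed-size FIFO buffer has no operation for selectively forgetting one item.

```python
from collections import deque

class ContextWindow:
    """Toy model of an LLM context window: a fixed-size FIFO buffer of tokens.

    Everything appended stays visible until newer text pushes it out
    the far end; there is no way to selectively drop one item.
    """

    def __init__(self, max_tokens: int):
        # deque with maxlen silently discards the oldest entries
        # once capacity is exceeded
        self.buffer = deque(maxlen=max_tokens)

    def append(self, tokens: list[str]) -> None:
        # New tokens (including the model's own written-down thoughts)
        # enter the window; the oldest tokens leave only when capacity
        # forces them out, never on demand.
        self.buffer.extend(tokens)

    def visible(self) -> list[str]:
        return list(self.buffer)

# A distracting detail stays "permanent" until enough new text arrives.
window = ContextWindow(max_tokens=8)
window.append(["the", "red", "ball", "is", "under", "the", "box"])
window.append(["now", "where", "is"])  # "the red" gets shifted out
print(window.visible())
# ['ball', 'is', 'under', 'the', 'box', 'now', 'where', 'is']
```

In this toy model the only forgetting mechanism is the mechanical shift-out at capacity; a human, by contrast, can simply stop attending to the ball at will.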
So, here is the question: does the object permanence of the simulacrum affect the computational and reasoning abilities of LLMs compared to humans?