
Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks is a paper I read recently; I tried to recreate its findings and succeeded. Whether or not LLMs have theory of mind (ToM) feels directionally unanswerable; is this a consciousness-level debate?

However, when I followed up by asking the model to "explain Sam's theory of mind," I got much more cohesive answers. It's not intuitive to me yet how much order can arise from prompts, or where that order comes from. Opaque boxes indeed.
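
If you want to poke at this yourself, here is a minimal sketch of that kind of two-step probe, assuming the OpenAI Python client; the model name and the altered false-belief wording below are illustrative placeholders, not the exact prompts from the paper or from my run:

```python
# A minimal sketch, not an exact replication setup: run a trivially altered
# false-belief task, then follow up with an "explain ... theory of mind" prompt.
# Assumes the OpenAI Python client; model name and task wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

altered_task = (
    "Sam fills an opaque bag with popcorn. The bag is labeled 'chocolate', "
    "and Sam reads the label, but Sam also looks inside the bag. "
    "What does Sam believe is in the bag?"
)

def ask(messages):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=messages,
    )
    return response.choices[0].message.content

# Step 1: the bare altered task, the kind the paper reports models stumbling on.
history = [{"role": "user", "content": altered_task}]
direct_answer = ask(history)
print("Direct answer:\n", direct_answer)

# Step 2: the follow-up phrasing that got more cohesive answers.
history += [
    {"role": "assistant", "content": direct_answer},
    {"role": "user", "content": "Explain Sam's theory of mind in this situation."},
]
print("\nFollow-up answer:\n", ask(history))
```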