I won't give my reasons for thinking this, but I'll give another reason that should make you take it more seriously: their sample complexity sucks.
Just think of anything you've wanted to use a gippity to understand, where it didn't quickly work: you asked followup questions and it didn't understand what was happening, didn't propagate propositions, didn't clarify, etc.
For its performances, current AI can pick at most 2 of 3 from: superhuman, interesting, general.
AlphaFold's outputs are interesting and superhuman, but not general. Likewise other Alphas.
LLM outputs are a mix. There's a large swath of things they can do superhumanly, e.g. generating sentences really fast, or various kinds of search. Search is, we could say, weakly novel in a sense: LLMs are superhumanly fast at doing a form of search which is not very reflective of general understanding. Quickly generating poems whose words all start with the letter "m", or very quickly and accurately answering stereotyped questions like analogies, is superhuman and reflects a weak sort of generality, but it is not interesting.
ImageGen is superhuman and a little interesting, but not really general.
Many architecture + training setups are substantively general (they can be applied to many datasets) and produce interesting outputs (models). However, considered as general training setups (i.e., to be applied across several contexts), they are subhuman.
It's straightforward to put to the test: they should be able to argue for their views in a way that stands up to scrutiny.
I'd like to see more intellectual scenes that seriously think about AGI and its implications. There are surely holes in our existing frameworks, and they can be hard for people operating within them to spot. Creating new spaces with different sets of shared assumptions seems like it could help.
Absolutely not, no. We need much better discovery mechanisms for niche ideas that only isolated people talk about, so that the correct ideas can be formed.
Thank you for writing this!
Hm. I super like the notion and would like to see it implemented well. The very first example was bad enough to make me lose interest: https://russellconjugations.com/conj/1eaace137d74861f123219595a275f82 (Text from https://www.thenewatlantis.com/publications/the-anti-theology-of-the-body)
So I tried the same thing but with more surrounding text... and it was much better!... though not actually for the subset I'd already tried above. https://russellconjugations.com/conj/3a749159e066ebc4119a3871721f24fc
A longer sentence is produced by putting more things together in the same [momentary working memory context], and asks the reader to do the same. This has advantages and disadvantages, but it is not the same.
These arguments are so nonsensical that I don't know how to respond to them without further clarification, and so far the people I've talked to about them haven't provided that clarification. "Programming" is not a type of cognitive activity any more than "moving your left hand in some manner" is. You could try writing out the reasoning, trying to avoid enthymemes, and then I could critique it / ask followup questions. Or we could have a conversation that we record and publish.