I don't think I'm selling what you're not buying, but correct me if I misrepresent your argument:
The post seems to assume a future version of generative AI that no longer has the limitations of the current paradigm which obligate humans to check, understand, and often in some way finely control and intervene in the output...
Depending on your quality expectations, even existing GenAI can make good-enough content that would otherwise have required nontrivial amounts of human cognitive effort.
...but where that tech is somehow not reliable and independent e
For what it's worth, I think even current, primitive-compared-to-what-will-come LLMs sometimes do a good job of (choosing words carefully here) compiling information packages that a human might find useful in increasing their understanding. It's very scattershot and always at risk of unsolicited hallucination, but in certain domains that are well and diversely represented in the training set, and for questions that have more or less objective answers, AI can genuinely aid insight.
The problem is the gulf between can and does. For reasons elaborated in...
To clarify: I didn't pick the figures entirely at random; they were based on the real-world data points and handwavy guesses below.
Agreed, largely.
To clarify, I'm not arguing that AI can't surpass humanity, only that there are certain tasks for which DNNs are the wrong tool and a non-AI approach is and possibly always will be preferred.
An AI can do such calculations the normal way if it really needs to carry them out.
This is a recapitulation of my key claim: that any future asymptotically powerful A(G)I (and even some current ChatGPT + agent services) will have non-AI subsystems for tasks where precision or scalability is more easily obtained by non-AI means, and that there will probably always be some such tasks.
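As a toy sketch of that "non-AI subsystem" idea (every name here is hypothetical, not any real agent framework's API): precision-critical tasks get routed to exact, conventional code paths rather than to the model.

```python
from fractions import Fraction

def exact_eval(expr):
    # Exact rational arithmetic -- no floating-point or sampling error.
    a, op, b = expr
    ops = {"+": lambda x, y: x + y,
           "*": lambda x, y: x * y,
           "/": lambda x, y: x / y}
    return ops[op](Fraction(a), Fraction(b))

# Hypothetical dispatcher: tasks where precision is cheap by non-AI means
# go to exact code; everything else would be handed to the model.
def route(task, payload):
    if task == "arithmetic":
        return exact_eval(payload)
    raise NotImplementedError("would go to the model")

print(route("arithmetic", (123456789, "*", 987654321)))  # exact, however many digits
```

The point isn't the dispatcher itself, just that the division of labour is natural: the exact path is trivially correct and scalable, while asking a generative model to multiply 9-digit numbers is neither.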
Plucked from thin air, to represent the (I think?) reasonably defensible claim that a neural net intended to predict/synthesise the next state (or short time series of states) of an operating system would need to be vastly larger and require vastly more training than even the most sophisticated LLM or diffusion model.
Is a photographer "not an artist" because the photos are actually created by the camera?
This can be dispensed with via Clark and Chalmers' Extended Mind thesis. Just as a violinist's violin becomes the distal end of their extended mind, so with brush and painter, and so with camera and photographer.
As long as AI remains a tool and does not start to generate art on its own, there will be a difference between someone who spends a lot of time carefully crafting prompts and a random bozo who just types "draw me a masterpiece".
I'm not as optimistic as you abo...
Agreed. However,
I like this.
It feels related to the assertion that DNNs can only interpolate between training data points, never extrapolate beyond them. (Technically they can extrapolate, but the results are hilarious/nonsensical/bad in proportion to how far beyond their training distribution they try to go.)
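Not a DNN, but the same in-distribution/out-of-distribution gap shows up in any flexible function approximator; a toy sketch fitting a polynomial to sin(x) and then querying it far outside the training range:

```python
import numpy as np

x_train = np.linspace(0, 2 * np.pi, 50)
y_train = np.sin(x_train)

# Flexible fit (a degree-9 polynomial standing in for an overparameterised model).
coeffs = np.polyfit(x_train, y_train, deg=9)

x_interp = np.linspace(0.5, 5.5, 100)               # inside the training range
x_extrap = np.linspace(3 * np.pi, 4 * np.pi, 100)   # well outside it

err_interp = np.max(np.abs(np.polyval(coeffs, x_interp) - np.sin(x_interp)))
err_extrap = np.max(np.abs(np.polyval(coeffs, x_extrap) - np.sin(x_extrap)))

print(err_interp, err_extrap)  # extrapolation error is far larger
```

Inside the training range the fit is close; past it, the polynomial heads off to wherever its leading terms point, in rough analogy to the "hilarious/nonsensical/bad in proportion to distance" behaviour above.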
Here's how I see your argument 'formalised' in terms of the two spaces (total combinatorial phase space and a post-threshold GenAI's latent space over the same output length); please correct anything you think I've got wrong:
A model can only be trained on what alread...
Cute! But how does each 16-digit 'image location' value (of 10^16 in total) uniquely represent one of the 4096^266240 possible images?
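Pigeonhole counting makes the mismatch concrete: 10^16 labels can only distinguish 10^16 things, while just writing down the count 4096^266240 takes on the order of a million decimal digits.

```python
import math

location_digits = 16                        # 10**16 possible 'image location' values
# Number of decimal digits needed to count 4096**266240 possible images:
image_digits = 266240 * math.log10(4096)

print(round(image_digits))  # roughly 9.6e5 digits -- vastly more than 16
```

So the 16-digit value can at best index a minuscule, pre-selected sliver of the full image space, not represent it uniquely.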
Very interesting article. Most of my objections have been covered by previous commentators, except:
1a. Implicit in the usual definition of the word 'simulation' is approximation, or 'data compression' as Michaël Trazzi characterises it. It doesn't seem fair to claim that a real system and its simulation are identical but for the absence of consciousness in the latter, if the latter is only an approximation. A weather forecasting algorithm, no matter how sophisticated and accurate, will never be as accurate as waiting to see what the real weather does, beca...
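A toy illustration of that last point: in a chaotic system, an initial-condition error at the limit of your measurement precision grows until the 'simulation' and 'reality' decorrelate entirely, so no forecast can indefinitely match waiting for the real thing. Here is the standard logistic-map demonstration:

```python
def logistic(x, r=4.0):
    # Logistic map at r=4: a simple, fully deterministic chaotic system.
    return r * x * (1.0 - x)

real, model = 0.2, 0.2 + 1e-12   # the 'simulation' starts with a tiny measurement error
max_gap = 0.0
for step in range(60):
    real, model = logistic(real), logistic(model)
    max_gap = max(max_gap, abs(real - model))

print(max_gap)  # the 1e-12 discrepancy has grown by many orders of magnitude
```

The map is perfectly known here; the divergence comes purely from the unavoidable approximation in the initial data, which is the 'data compression' point in miniature.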
(Your response and arguments are good, so take the below in a friendly and non-dogmatic spirit)
Good enough for time-pressed people (and lazy and corrupt people, but they're in a different category) to have a black-box system do things for them that they might, in the absence of the black-box system, have invested the effort to do themselves and, as an intended or unintended result, increased their understanding, opening up new avenues of doing and understanding.
I'm pretty sure we're cur...