If you're actually interested, see my Capabilities and alignment of LLM cognitive architectures. That's one way we could get from where we are, which is very limited, to "Real AGI" that will be both useful and dangerous.
This community mostly isn't worried about current AI. We're worried about future AIs.
The rate at which they get there is difficult to predict. But it's not "anyone's guess". People with more time-on-task in thinking about both current AI and what it takes to constitute a useful, competent, and dangerous mind (e.g., human cognition or a hypothetical general AI) tend to have short timelines.
We could be wrong, but assuming it's a long way off is even more speculative.
That's why we're trying to solve alignment ASAP (or, in some cases, arguing that it's so difficult that we must stop building AGI altogether). It's not clear which is the better strategy, because we haven't gotten far enough on alignment theory. That's why you see a lot of conflicting claims from well-informed people.
But dismissing the whole thing is just wishful thinking. Even when experts do it, it just doesn't make sense, because there are other experts with equally good arguments that it's deathly dangerous in the short term. Nobody knows. So seeing non-experts dismiss it because they "trust their intuitions" is somewhere between tragedy and comedy.
Not a single current "AI" can do all of it simultaneously. All of them are neural nets that can't even learn and perform more than one task, to say nothing of escaping the power of alt+f4.
Unlike humans, machines can be extended / combined. If you have two humans, one of them is a chess grandmaster and the other is a famous poet... you have two human specialists. But if you have two machines, one great at chess and another great at poetry, you could in principle combine them to get one machine that is good at both. (You would need one central module that gives commands to the specialized modules, but that seems like something an LLM could already manage.)
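Here is a minimal sketch of that "central module plus specialists" idea; every function below is a made-up stand-in for illustration, not any real system or API:

```python
# Illustrative sketch of a central module commanding specialized modules.
# All functions are placeholders, not real models.

def classify_task(task: str) -> str:
    """The central (LLM-like) module decides which specialist handles the task."""
    if "move" in task or "chess" in task:
        return "chess"
    return "poetry"

def chess_engine(task: str) -> str:
    return f"[chess specialist] best move for: {task}"

def poetry_model(task: str) -> str:
    return f"[poetry specialist] a poem about: {task}"

SPECIALISTS = {"chess": chess_engine, "poetry": poetry_model}

def combined_system(task: str) -> str:
    # The controller only routes; each specialist stays narrow,
    # but the combined system covers both domains.
    return SPECIALISTS[classify_task(task)](task)

print(combined_system("find the best move after 1. e4 e5"))
print(combined_system("write a sonnet about autumn"))
```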
LLMs can learn new things. At least in the sense that they have a long-term memory that was set during training and probably cannot be updated afterwards (I don't understand in detail how these things work), plus a smaller short-term memory where they can choose to store some information; it's basically as if whatever is stored there were added to every prompt made afterwards. This feature was recently added to ChatGPT.
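A toy sketch of how I imagine that short-term memory works (this is my simplified guess at the mechanism, not a description of OpenAI's actual implementation):

```python
# Toy illustration of "memory as text prepended to every prompt".
# The model weights (long-term memory) never change; only this list does.

memory_notes: list[str] = []

def remember(note: str) -> None:
    """The model (or the user) chooses to store a fact."""
    memory_notes.append(note)

def build_prompt(user_message: str) -> str:
    """Every new prompt silently carries the stored notes along."""
    memory_block = "\n".join(f"- {note}" for note in memory_notes)
    return f"Things to remember:\n{memory_block}\n\nUser: {user_message}"

remember("The user prefers answers in metric units.")
print(build_prompt("How tall is Mount Everest?"))
```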
When an AI becomes smart enough to make or steal some money, obtain fake human credentials, rent some space in the cloud, and copy itself there, you can keep pressing alt+f4 as much as you want.
Are we there yet? No. But remember that five years ago, if someone had described ChatGPT, most people would have laughed at them and said we wouldn't get there in a hundred years.
Now my words will start to sound brash and I'll look like an overconfident noob, but... this claim is most likely an outright lie. GPT-4 and Gemini aren't even two-task neural nets. Neither can take a picture and edit it. Instead, an image-recognition net passes text to a blind text-only net, which works only with text and lacks any basic understanding of space; that net then writes a prompt for an image-generation net that can't see the original image.
Ignoring for the moment the "text-to-image and image-to-text models use a shared latent space to translate between the two domains, and so they are, to a significant extent, operating on the same conceptual space" quibble...
GPT-4 and Gemini can both use tools, and can also build tools. Humans without access to tools aren't particularly scary on a global scale. Humans with tools can be terrifying.
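To make "using tools" concrete, here is a minimal sketch of a tool-use loop, under the assumption that the model emits a structured tool request and a thin wrapper executes it; `fake_llm` and the rest are made-up stand-ins, not any real API:

```python
import json

# Minimal sketch of an LLM tool-use loop. `fake_llm` stands in for a real
# model call; everything else is ordinary glue code a wrapper would run.

def calculator(expression: str) -> str:
    # A deliberately restricted "tool" the model is allowed to invoke.
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        return "error: disallowed characters"
    return str(eval(expression))

TOOLS = {"calculator": calculator}

def fake_llm(prompt: str) -> str:
    # A real model would decide this itself; here we hard-code a tool request.
    return json.dumps({"tool": "calculator", "input": "12 * (3 + 4)"})

def run_agent(user_request: str) -> str:
    reply = json.loads(fake_llm(user_request))
    tool = TOOLS[reply["tool"]]
    result = tool(reply["input"])
    return f"Tool {reply['tool']} returned: {result}"

print(run_agent("What is 12 times the sum of 3 and 4?"))
```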
"Agency" Yet no one have told that it isn't just a (quite possibly wrong) hypothesis. Humans don't work like that: no one is having any kind of a primary unchangeable goal they didn't get from learning, or wouldn't overcome for a reason. Nothing seems to impose why a general AI would, even if (a highly likely "if") it doesn't work like a human mind.
There are indeed a number of posts and comments and debates on this very site making approximately that point, yeah.
No matter how many gigabytes it writes per second, I'm not afraid of something that sees the world as text. +1 point to capitalism for not building something that is overly expensive to develop and might doom the world.
Take, for example, AISafety.info, which tries to explain it.
Not a single current "AI" can do all of it simultaneously. All of them are neural nets that can't even learn and perform more than one task, to say nothing of escaping the power of alt+f4.
As if other pieces of technology weren't just as "autonomous" and dangerous.
Ah, GPT-4. This neural net recognizes text. The text may come from an image-recognition net, and may be sent on to an image-generation net.
It still sees the world as text, doesn't even learn from users' inputs, and doesn't act unless an input button is pressed. How is it dangerous? Especially since...
Now my words will start to sound brash and I'll look like an overconfident noob, but... this claim is most likely an outright lie. GPT-4 and Gemini aren't even two-task neural nets. Neither can take a picture and edit it. Instead, an image-recognition net passes text to a blind text-only net, which works only with text and lacks any basic understanding of space; that net then writes a prompt for an image-generation net that can't see the original image.
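To spell out what I mean, the "editing" pipeline described above would look roughly like this (every function here is a placeholder I made up for illustration, not anyone's real code):

```python
# Sketch of the "separate nets passing text around" pipeline described above.
# All three model functions are placeholders, not real models.

def image_to_text(image_path: str) -> str:
    """Image-recognition net: produces a caption, discarding most visual detail."""
    return f"a photo of a cat sitting on a red chair (from {image_path})"

def text_model(caption: str, instruction: str) -> str:
    """Text-only net: never sees pixels, only the caption and the request."""
    return f"{caption}, but the chair is blue"

def text_to_image(prompt: str) -> str:
    """Image-generation net: draws from the prompt alone, without the original image."""
    return f"<new image rendered from prompt: {prompt!r}>"

def edit_image(image_path: str, instruction: str) -> str:
    caption = image_to_text(image_path)            # step 1: describe the image
    new_prompt = text_model(caption, instruction)  # step 2: reason in text only
    return text_to_image(new_prompt)               # step 3: redraw from scratch

print(edit_image("cat.jpg", "make the chair blue"))
```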
Talking about general intelligence (with the ability to learn any task), when the largest companies can only lie about a TWO-task one, is what initially made me ragequit the topic.
To say nothing of that "AI cashier" fraud, and the fact that OpenAI doesn't seem to consider AI safety (the safety of a hypothetical project that isn't even being developed?) all that important.
This is probably the question with the greatest potential to change my mind, and I'll ask it as politely as I can. Has any progress been made since the Terminator franchise, which I was rightfully told on LessWrong not to treat as a good example?
You can't solve these, by definition. Suppose an AI emerges accidentally (since not that many man-hours are going into making it on purpose), and suppose intelligence safety really is achievable through theory and philosophy. Then the AI will be better and faster at developing "human intelligence safety" than you would be at developing safety for that AI. The moment you decide to play this game, no move can bring you closer to victory. Don't waste resources on it.
"Agency" Yet no one have told that it isn't just a (quite possibly wrong) hypothesis. Humans don't work like that: no one is having any kind of a primary unchangeable goal they didn't get from learning, or wouldn't overcome for a reason. Nothing seems to impose why a general AI would, even if (a highly likely "if") it doesn't work like a human mind.