Does this analogy work, though?
It makes sense that you can get brand new sentences or brand new images that can even serve some purpose using ML, but is it creativity? That raises the question of what creativity is in the first place, and that's a whole new can of worms. You gave me an example of how Bing can write poems that were not in the dataset, but poem writing is a task that can be quite straightforwardly formalized, like a collection of lines that end on alternating syllables or something, and "write me a poem about sunshine and butterflies" is clearly a vastly easier prompt than "give me a theory of everything". The resulting poem might be called creative if interpreted generously, but actual, novel scientific knowledge is a whole other level of creative, so much so that we should probably put these things in different conceptual boxes.
Maybe that's just a failure of imagination on my part? I do admit that I, likewise, just really want it to be true, so there's that.
>If you get strongly superhuman LLMs, you can trivially accelerate scientific progress on agentic forms of AI like Reinforcement Learning by asking it to predict continuations of the most cited AI articles of 2024, 2025, etc.
The question that might be at the heart of the issue is what's needed for AI to produce genuinely new insights. As a layman, I can see how an LM might become even better at generating human-like text, might become super-duper good at remixing and rephrasing things it "read" before, and yet hit a wall when it comes to reaching AGI. Maybe to get genuine intelligence we need more than a "predict-next-token kind of algorithm + obscene amounts of compute and human data", and should instead mimic more closely how actual people think?
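(For concreteness, here's roughly the loop I mean by "predict-next-token" - a toy Python sketch, where the made-up scores stand in for a trained model; nothing here is from the OP, it's just to pin down the term.)

```python
import math
import random

# Toy version of the "predict next token" loop.
# fake_logits is a stand-in for a trained model's scores;
# a real LM computes these from billions of learned weights.
fake_logits = {"sunshine": 2.0, "butterflies": 1.4, "equations": -0.5}

def sample_next_token(logits):
    # Softmax: turn raw scores into a probability distribution.
    total = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / total for tok, v in logits.items()}
    # Pick the next token in proportion to its probability.
    return random.choices(list(probs), weights=list(probs.values()))[0]

prompt = "write me a poem about"
print(prompt, sample_next_token(fake_logits))
```

The whole disagreement is about whether iterating that loop, at sufficient scale, ever amounts to the "genuinely new insight" step, or just to ever-better remixing.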
Perhaps local AI alarmists (it's not a pejorative, I hope? OP does declare alarm, though) would like to try to persuade me otherwise, be it in their own words or by doing their best to hide their condescension and pointing me to the numerous places where this idea was discussed before?
"Recently, a group of Russian biohackers recently performed..."
Just reporting a little mistake here.
Good overview.
I have to admit, reading things like this, I can't help but be put at ease somewhat. I almost feel the AI alarmism leaving my body.
Here's my guess as to why that happens: rat-sphere bloggers are the ones responsible for me treating the AI threat seriously. Seeing how someone smart enough to post here gets so... carried away, decides to post it, and receives not lighthearted ridicule but upvotes and the usual "AGI around the corner" chatter reminds me that this community is still made of mere people, and that worrying about AI is partly a cultural norm here, a meme. It also shifts my prior somewhat in favor of AI skepticism - if you guys can get carried away in this manner, perhaps the AI-doom scenarios also have a critical flaw that I'm not bright enough to see? Hope so!