Prof Penadés said the tool had in fact done more than successfully replicate his research.
"It's not just that the top hypothesis they provide was the right one," he said.
"It's that they provide another four, and all of them made sense.
"And for one of them, we never thought about it, and we're now working on that."
Dr. Penadés gave the AI a prompt and it came up with four hypotheses, one of which the researchers had not come up with themselves. Is that not proof of original thought?
Pretty sure I've seen this particular case discussed here previously, and the conclusion was that they had actually already published something related and fed it to the "co-scientist" AI. So it was synthesising/interpolating from information it had been given, rather than generating fully novel ideas.
Per New Scientist: https://www.newscientist.com/article/2469072-can-googles-new-research-assistant-ai-give-scientists-superpowers/
That concerned the main hypothesis, the one that agreed with their work; it's unknown whether the same is true for the additional hypotheses. But I'm sceptical by default of the claim that it couldn't possibly have come from the training data, or that they definitely didn't inadvertently hint at things with the data they provided.
I agree, but when people want to use the presence or absence of Original Thought™ as a criterion for judging the capabilities of AI, then where that line is drawn matters, and the judge should write it down, even if it is only approximate.