Pretty sure I've seen this particular case discussed here previously, and the conclusion was that they had actually already published something related and fed it to the "co-scientist" AI. So it was synthesising/interpolating from information it had been given, rather than generating fully novel ideas.
Per New Scientist: https://www.newscientist.com/article/2469072-can-googles-new-research-assistant-ai-give-scientists-superpowers/
However, the team did publish a paper in 2023 – which was fed to the system – about how this family of mobile genetic elements “steals bacteriophage tails to spread in nature”. At the time, the researchers thought the elements were limited to acquiring tails from phages infecting the same cell. Only later did they discover the elements can pick up tails floating around outside cells, too.
So one explanation for how the AI co-scientist came up with the right answer is that it missed the apparent limitation that stopped the humans getting it.
What is clear is that it was fed everything it needed to find the answer, rather than coming up with an entirely new idea. “Everything was already published, but in different bits,” says Penadés. “The system was able to put everything together.”
That concerns the main hypothesis, the one that agreed with their work; whether the same is also true of its additional hypotheses is unknown. But I'm sceptical by default of the claim that it couldn't possibly have come from the training data, or that they definitely didn't inadvertently hint at things with the data they provided.
Technically it's still never falsifiable. It could be verified, if true, upon finding yourself in an afterlife after death; but if it's false, you never get to observe it being false, because you simply cease existing.
https://en.wikipedia.org/wiki/Eschatological_verification
If we define a category of beliefs that are currently neither verifiable nor falsifiable, but might eventually become verifiable if they happen to be true, while never becoming falsifiable even if they're false, then that category potentially includes an awful lot of invisible pink dragons and orbiting teapots (who knows, perhaps one day we'll invent better teapot detectors and find one). So I don't see it as a strong argument for putting credence in such ideas.
Looks like #6 in the TL;DRs section is accidentally duplicated (with the repeat numbered as #7)
Solid point. I realise I was unclear: for face shape I had in mind external influences in utero (while the bones of the face are growing into place in the fetus), which would at least be a somewhat shared environment between twins. Nonetheless, I'm changing my mind in real time, because I would have expected more difference from one side of a womb to the other than we actually see between twins.
Even if I'm mistaken about faces though, I don't think I'm wrong about brains, or humans in general.
In other words, all the information that controls the shape of your face, your bones, your organs and every single enzyme inside them – all of that takes less storage space than Microsoft Word™.
The shape of your face, and much else besides, will be affected by random chance and environmental influences during the process of development and growth.
The eventual details of the brain, likewise, will be in large part a response to the environment—developing and learning from experience.
So the final complexity of a human being is not actually bounded by the data contained in the genome, in the way described.
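For rough scale on the quoted storage claim (my own back-of-the-envelope figures, not numbers from the original post): the human genome is roughly 3.1 billion base pairs, and each base carries at most 2 bits, so the raw sequence fits in well under a gigabyte.

```python
# Back-of-the-envelope: raw storage for the human genome sequence.
# Rough assumptions: ~3.1e9 base pairs, 2 bits per base (A/C/G/T), no compression.
base_pairs = 3.1e9
bits_per_base = 2

genome_bytes = base_pairs * bits_per_base / 8
print(f"Genome, uncompressed 2-bit encoding: {genome_bytes / 1e9:.2f} GB")  # ~0.78 GB
```

That is plausibly smaller than a typical Microsoft Word installation, so the quoted comparison is fair as far as it goes; my point above is only that this figure bounds the specification, not the finished organism, because development and learning add information from the environment on top of it.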
I wasn't the one eating them, but having prepared a couple of Huel's "hot meal pot/pouch" options for my partner (I forget which ones exactly, but something in the way of mac & cheese or pasta bolognese), I can report that I found the smell coming off them profoundly unappetising.
Not sure how they went down with her, but there's a small stash of these pots in the cupboard that she hasn't touched beyond the first few—so I suspect not very well.
Slight glitches:
The "chapter shortcuts" section of https://www.lesswrong.com/s/9SJM9cdgapDybPksi lists "editPost" links to the chapter drafts (inaccessible to others)
The numbering in the post titles skips over #4
Oh I was very on board with the sarcasm. Although as a graduate of one of them, I obviously can't believe you're rating the other one so highly.
This is a general principal
Principle* — unless they're the head-teacher of a school, the type to be involved in a principal/agent problem, or otherwise the "first"
graduates of the great English universities (both of them)
Shots fired
"I am feeling stressed about buying a plane ticket" would acknowledge that the stress is coming from within you as an individual, and doesn't foreclose the possibility of instead not feeling stressed.