Calling cancer a disease is like calling aging a disease. We definitely want to call it a disease, because otherwise it couldn't get federal funding. But a doctor is unlikely to see two cancer cases in her lifetime which have exactly the same causes. Cancerous cells typically appear to have about 100 mutations, roughly 10 of which are likely to have collectively caused the cancer, based on analysis of the gene networks they affect. Some of the mutated genes recur across many cancers (eg BRCA1, p53); some do not.
The gene networks disrupted in cancer are generally related to the regulation of the cell cycle, DNA repair, or apoptosis. Any set of mutations that damages these networks sufficiently may cause cancer, but the specific way the cancer develops will depend on the precise mutations. So when we ask "what causes cancer", we're not asking a question with a specific answer, like "what causes AIDS"; we're asking a question more like "what causes my car to stop running". DNA damage may cause cancer, just as shooting enough bullets at your car may cause it to stop running.
Today we can distinguish cancers at about the level of resolution at which we might say, "This car stopped running because its tires deflated", "This car stopped because its oil leaked out", or "This car stopped because its radiator fluid leaked out." To fix the car, you'd really like to know exactly which of its many hoses, fuses, or linkages were destroyed, which is analogous to knowing exactly which genes were mutated. (My analogy loses accuracy here, because car-part networks are more-easily disrupted, while gene networks are more-easily pushed back into a healthy attractor by a generic up-regulation or down-regulation caused by some drug. Also, you can't fix a car by removing all the damaged parts.)
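To make the "attractor" language concrete, here's a minimal sketch of a Boolean gene-regulatory network settling into attractors, and of how pinning one node up or down (the toy analogue of a drug's generic up-regulation) changes which attractor the system falls into. The three genes, their update rules, and the intervention are all invented for illustration; real regulatory networks have thousands of nodes.

```python
import itertools

# Hypothetical 3-gene Boolean network (genes and rules are invented for
# illustration). State: (cycle, repair, apoptosis), each on (1) or off (0).

def step(state, force=None):
    """One synchronous update; 'force' pins one gene to a value (a 'drug')."""
    cycle, repair, apoptosis = state
    nxt = (
        int(cycle and not apoptosis),  # cell cycle runs unless apoptosis fires
        int(cycle),                    # repair follows cycle activity
        int(not repair),               # apoptosis triggers when repair fails
    )
    if force is not None:
        gene, value = force
        nxt = nxt[:gene] + (value,) + nxt[gene + 1:]
    return nxt

def attractor(state, force=None):
    """Iterate until the trajectory revisits a state; return the cycle found."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state, force)
    return seen[seen.index(state):]

# Every start state falls into some attractor:
for s in itertools.product([0, 1], repeat=3):
    print(s, "->", attractor(s))

# Pinning 'repair' on (force=(1, 1)) moves the start state (1, 0, 0) from the
# apoptosis attractor into a quiescent one -- the toy analogue of a generic
# up-regulation pushing the network into a different attractor.
print(attractor((1, 0, 0), force=(1, 1)))
```

The point the sketch makes is structural: which attractor you land in depends on the wiring of the whole network, not on any single damaged part.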
It's been obvious for many years that curing cancer requires personalized medicine of the kind mentioned in this post, in which what the FDA approves is an algorithm to find a custom cure for any individual, not a specific chemical or treatment. I'm very glad to hear the FDA has taken this step.
I expect a generic algorithm to cure cancer will require cell simulation, and probably tissue and biofilm simulation to get the drugs, siRNAs, plasmids, or whatever into the right cells.
This also sounds like the stereotypical literary / genre fiction distinction.
And it sounds like the Romantic craft / art distinction. The concepts of human creativity, and of visual art as something creative or original rather than as craftsmanship or expertise, were both invented in France and England around 1800. Before then, for most of history in most places, there was no art/craft distinction. A medieval court artist might paint portraits or build chairs. As far as I've been able to determine, no one in the Western world but madmen and children ever drew a picture of an original story of their own invention before William Blake--and everybody knows he was mad.
This distinction was inverted with the modern art revolution. The history of modern art that you'll find in books and museums today is largely bunk. It was not a reaction to WW1 (modern art was already well-developed by 1914).

It was a violent, revolutionary, Platonist spiritualist movement, and its foundational belief was the rejection of the Romantic conception of originality and creativity as the invention of new stories, to be replaced by a return to the Platonist belief--shared later by the post-modernists--that there was no such thing as creativity, only divine inspiration granting the Artist direct access to Platonic forms. Hence the devaluation of representational art, with its emphasis on creating new narratives and new ideas, in favor of elevating new styles and new media; and also the acceptance of the revolutionary Hegelian doctrine that you don't need to have a plan to have a revolution, because construction of something new is impossible. In Hegel, all that is possible, and all that is needed, to improve art or society is to destroy it. This is evident in eg Wyndham Lewis and Ezra Pound's BLAST and the Dada Manifesto. Modern artists weren't reacting to WW1; they helped start it.
References for these claims are in Modernist Manifestos & WW1: We Didn't Start the Fire—Oh, Wait, we Totally Did.
Some chickens will be coming home to roost, now that the only part of art AI isn't good at--creating new ideas and new stories that aren't just remixes of the old--is precisely the part which modern art explicitly rejected.
That's an old game. My first PhD advisor did nothing with my thesis chapters but mark grammatical errors in red pen and hand them back. If your advisor isn't doing anything else for you now, he certainly won't do anything for you after you've graduated. You may need to get a new advisor.
I realize that I ignored most of the post in my comment above. I'm going to write a sloppy explanation here of why I ignored most of it, which I mean as an excuse for my omissions, rather than as a trustworthy or well-thought-out rebuttal of it.
To me, the post sounds like it was written based on reading Hubert Dreyfus' What Computers Can't Do, plus the continental philosophy it was based on, rather than on materialism, computationalism, and familiarity with LLMs. There are parts of it that I did not understand, which for all I know may overcome some of my objections.
"We care centrally about the thought process behind words—the mental states of the mind and agency that produced the words. If you publish LLM-generated text as though it were written by someone, then you're making me interact with nothing."
This implies that ad hominem attacks are good epistemology. But I don't care centrally about the thought process. I care about the meaning of the words. Caring about the process instead of the content is what philosophers do; they study a philosopher instead of a topic. That's a large part of why they make no progress on any topic.
"Why LLM it up? Just give me the prompt." Another reason not to do that is that LLMs are non-deterministic. A third reason is that I would have to track down that exact model of LLM, which I probably don't have a license for. A fourth is that text storage on LessWrong.com is cheap, and my time is valuable. A fifth is that some LLMs are updated or altered daily. I see no reason to give someone the prompt instead of the text. That is strictly inferior in every way.
I think that referring to LLMs at all in this post is a red herring. The post should simply say, "Don't cite dubious sources without checking them out." The end. Doesn't matter whether the sources are humans or LLMs. I consider most recent LLMs more-reliable than most people. Not because they're reliable; because human reliability is a very low bar to clear.
The main point of my 1998 post "Believable Stupidity" was that the worst failure modes of AI dialogue are also failure modes of human dialogue. This is even more true today. I think humans still produce more hallucinatory dialogue than LLMs. Some I dealt with last month:
These are the same sort of hallucinations as those produced by LLMs when some keyword or over-trained belief spawns a train of thought which goes completely off the rails of reality.
Consider the notion of "performativity", usually attributed to the Nazi activist Heidegger. This is the idea that the purpose of much speech is not to communicate information, but to perform an action, and especially to enact an identity such as a gender role or a political affiliation.
In 1930s Germany, this manifested as a set of political questions, each paired with a proper verbal response, which the populace was trained in behavioristically, via reward and punishment. Today in the US, this manifests as two opposing political programs, each consisting of a set of questions paired with their proper verbal responses, which are taught via reward and punishment.
One of these groups learned performativity from the Nazis via the feminist Judith Butler. The other had already learned it at the First Council of Nicaea in 325 AD, in which the orthodox Church declared that salvation (and not being exiled or beheaded) depended on using the word homoousios instead of homoiousios, even though no one could explain the difference between them. The purpose in all four cases was not to make an assertion which fit into a larger argument; it was to teach people to agree without thinking by punishing them if they failed to mouth logical absurdities.
So to say "We have to listen to each other's utterances as assertions" is a very Aspie thing to say today. The things people argue about the most are not actually arguments, but are what the post-modern philosophers Derrida and Barthes called "the discourse", and claimed was necessarily hallucinatory in exactly the same way LLMs are today (being nothing but mash-ups of earlier texts). Take a stand against hallucination as normative, but don't point to LLMs when you do it.
Yeah, probably. Sorry.
I didn't paste LLM output directly. I had a much longer interaction with 2 different LLMs, and extracted the relevant output from different sections, combined them, and condensed it into the very short text posted. I checked the accuracy of the main points about the timeline, but I didn't chase down all of the claims as thoroughly as I should have when they agreed with my pre-existing but not authoritative opinion, and I even let bogus citations slip by. (Both LLMs usually get the author names right, but often hallucinate later parts of a citation.)
I rewrote the text, keeping only claims that I've verified, or that are my opinions or speculations. Then I realized that the difficult, error-laden, and more-speculative section I spent 90% of my time on wasn't really important, and deleted it.
Me too! I believe that evolution DID fix it--apes don't have this problem--and that the scrotum devolved after humans started wearing clothes. 'Coz there's no way naked men could run through the bush without castrating themselves.
I know many people whose lives were radically changed by The Lord of the Rings, The Chronicles of Narnia, Star Wars, or Ender's Game.
The first three spawned a vast juvenile fantasy genre which convinces people that they're in a war between pure good and pure evil, in which the moral thing to do is always blindingly obvious. (Star Wars at least had a redemption arc, and didn't divide good and evil along racial lines. In LotR and Narnia, as in Marxism and Nazism, the only possible solution is to kill or expel every member of the evil races/classes.) I know people on both sides of today's culture war who I believe were radicalized by Lord of the Rings.
Today's readers don't even know fantasy wasn't that way before Tolkien and Lewis! It was adult literature, not wish-fulfilment. Read Gormenghast, A Voyage to Arcturus, The Worm Ouroboros, or The King of Elfland's Daughter. It often had a nihilistic or tragic worldview, but never the pablum of Lewis or Tolkien.
Ender's Game convinces people that they are super-geniuses who can turn the course of history single-handedly. Usually this turns out badly, though it seems to have worked for Eliezer.