PhilGoetz

Comments, sorted by newest
The Unreasonable Effectiveness of Fiction
PhilGoetz8d*7-8

I know many people whose lives were radically changed by The Lord of the Rings, The Narnia Chronicles, Star Wars, or Ender's Game.

The first three spawned a vast juvenile fantasy genre which convinces people that they're in a war between pure good and pure evil, in which the moral thing to do is always blindingly obvious. (Star Wars at least had a redemption arc, and didn't divide good and evil along racial lines.  In LotR and Narnia, as in Marxism and Nazism, the only possible solution is to kill or expel every member of the evil races/classes.)  I know people on both sides of today's culture war who I believe were radicalized by Lord of the Rings.

Today's readers don't even know fantasy wasn't that way before Tolkien and Lewis!  It was adult literature, not wish-fulfilment.  Read Gormenghast, A Voyage to Arcturus, The Worm Ouroboros, or The King of Elfland's Daughter.  It often had a nihilistic or tragic worldview, but never the pablum of Lewis or Tolkien.

Ender's Game convinces people that they are super-geniuses who can turn the course of history single-handedly.  Usually this turns out badly, though it seems to have worked for Eliezer.

Cancer has a surprising amount of detail
PhilGoetz9d*20

Calling cancer a disease is like calling aging a disease. We definitely want to call it a disease, because otherwise it couldn't get federal funding.  But a doctor is unlikely to see two cancer cases in her lifetime which have exactly the same causes. Cancerous cells appear to typically have about 100 mutations, about 10 of which are likely to have collectively caused the cancer, based on analysis of the gene networks they affect. Some of the genes mutated are mutated in many cancers (eg BRCA1, p53); some are not.

The gene networks disrupted in cancer are generally related to the regulation of the cell cycle, DNA repair, or apoptosis.  Any set of mutations that damages these networks sufficiently may cause cancer, but the specific way cancer develops will depend on the precise mutations.  So when we ask "what causes cancer", we're not asking a question that has a specific answer, like "what causes AIDS"; we're asking a question which is more like asking "what causes my car to stop running". DNA damage may cause cancer, just like shooting enough bullets at your car may cause it to stop running.

Today we can distinguish cancers with about the level of resolution that we might say, "This car stopped running because its tires deflated", "This car stopped because its oil leaked out", "This car stopped because its radiator fluid leaked out."  To fix the car, you'd really like to know exactly which of many hoses, fuses, or linkages were destroyed, which is analogous to knowing exactly which genes were mutated.  (My analogy loses accuracy here because car-part networks can be more-easily disrupted, while gene networks can be more-easily pushed back into a healthy attractor by a generic up-regulation or down-regulation caused by some drug.  Also, you can't fix a car by removing all the damaged parts.)

It's been obvious for many years that curing cancer requires personalized medicine of the kind mentioned in this post, in which the FDA approves an algorithm for finding a custom cure for each individual, rather than a specific chemical or treatment.  I'm very glad to hear the FDA has taken this step.

I expect a generic algorithm to cure cancer will require cell simulation, and probably tissue and biofilm simulation to get the drugs, siRNAs, plasmids, or whatever into the right cells.

LLM-generated text is not testimony
PhilGoetz11d20

This also sounds like the stereotypical literary / genre fiction distinction.

And it sounds like the Romantic craft / art distinction.  The concepts of human creativity, and of visual art as something creative or original rather than as craftsmanship or expertise, were both invented in France and England around 1800.  Before then, for most of history in most places, there was no art/craft distinction.  A medieval court artist might paint portraits or build chairs.  As far as I've been able to determine, no one in the Western world but madmen and children ever drew a picture of an original story, which they made up themselves, before William Blake--and everybody knows he was mad.

This distinction was inverted with the modern art revolution.  The history of modern art that you'll find in books and museums today is largely bunk.  It was not a reaction to WW1 (modern art was already well-developed by 1914).  It was a violent, revolutionary, Platonist spiritualist movement, and its foundational belief was the rejection of the Romantic conception of originality and creativity as the invention of new stories, to be replaced by a return to the Platonist and post-modernist belief that there was no such thing as creativity, only divine inspiration granting the Artist direct access to Platonic forms.

Hence the devaluation of representational art, with its elevation of the creation of new narratives and new ideas, in favor of the elevation of new styles and new media; and also the acceptance of the revolutionary Hegelian doctrine that you don't need to have a plan to have a revolution, because construction of something new is impossible.  In Hegel, all that is possible, and all that is needed, to improve art or society, is to destroy it.  This is evident in eg the Vorticist journal BLAST (Wyndham Lewis and Ezra Pound) and the Dada Manifesto.  Modern artists weren't reacting to WW1; they helped start it.

References for these claims are in:

  • The Creativity Revolution
  • Modernist Manifestos & WW1: We Didn't Start the Fire—Oh, Wait, we Totally Did

Some chickens will be coming home to roost now that the only part of art that AI isn't good at--that of creating new ideas and new stories that aren't just remixes of the old--is that part which modern art explicitly rejected.

LLM-generated text is not testimony
PhilGoetz11d51

That's an old game.  My first PhD advisor did nothing with my thesis chapters but mark grammatical errors in red pen and hand them back.  If your advisor isn't doing anything else for you now, he certainly won't do anything for you after you've graduated.  You may need to get a new advisor.

LLM-generated text is not testimony
PhilGoetz11d*0-9

I realize that I ignored most of the post in my comment above.  I'm going to write a sloppy explanation here of why I ignored most of it, which I mean as an excuse for my omissions, rather than as a trustworthy or well-thought-out rebuttal of it.

To me, the post sounds like it was written based on reading Hubert Dreyfus' What Computers Can't Do, plus the continental philosophy it was based on, rather than on materialism, computationalism, and familiarity with LLMs.  There are parts of it that I did not understand, which for all I know may overcome some of my objections.

  • I don't buy the vitalist assertion that there aren't live mental elements underlying the LLM text, nor the non-computationalist claim that there's no mind that is carrying out investigations.  These are metaphysical claims.
  • I very much don't buy that LLM text is not influenced by local-contextual demands from "the thought" back to the more-global contexts.  I would say that is precisely what deep neural networks were invented to do, and what 3-layer backprop networks can't.
  • Just give someone the prompt?  It wouldn't work, because LLMs are non-deterministic. 
    I might not be able to access that LLM.  It might have been updated.  I don't want to take the time to do it.  I just want to read the text.
  • "If the LLM text contains surprising stuff, and you DID thoroughly investigate for yourself, then you obviously can write something much better and more interesting."
    • This is not obvious, and certainly not always efficient.  Editing the LLM's text, and saying you did so, is perfectly acceptable.
    • This would be plagiarism.  Attribute the LLM's ideas to the LLM.  The fact that an LLM came up with a novel idea is an interesting fact.
    • The most-interesting thing about many LLM texts is the dialogue itself--ironically, for the same reasons Tsvi gives that it's helpful to be able to have a dialogue with a human.  I've read many transcripts of LLM dialogues which were so surprising and revelatory that I would not have believed them if I were just given summaries of them, or which were so complicated that I could not have understood them without the full dialogue.  Also, it's crucial to read a surprising dialogue yourself, verbatim, to get a feel for how much of the outcome was due to leading questions and obsequiousness.
  • But I don't buy the argument that we shouldn't quote LLMs because we can't interrogate them, because
    • it also implies that we shouldn't quote people or books, or anything except our own thoughts
    • it's similar to the arguments Plato already made against writing, which have proved unconvincing for over 2000 years
    • we can interrogate LLMs, at least more-easily than we can interrogate books, famous people, or dead people
LLM-generated text is not testimony
PhilGoetz11d2-3

We care centrally about the thought process behind words—the mental states of the mind and agency that produced the words. If you publish LLM-generated text as though it were written by someone, then you're making me interact with nothing.

 

This implies that ad hominem attacks are good epistemology.  But I don't care centrally about the thought process.  I care about the meaning of the words.  Caring about the process instead of the content is what philosophers do; they study a philosopher instead of a topic.  That's a large part of why they make no progress on any topic.

LLM-generated text is not testimony
PhilGoetz11d8-1

"Why LLM it up? Just give me the prompt." Another reason not to do that is that LLMs are non-deterministic.  A third reason is that I would have to track down that exact model of LLM, which I probably don't have a license for.  A fourth is that text storage on LessWrong.com is cheap, and my time is valuable.  A fifth is that some LLMs are updated or altered daily.  I see no reason to give someone the prompt instead of the text. That is strictly inferior in every way.

LLM-generated text is not testimony
PhilGoetz11d*195

I think that referring to LLMs at all in this post is a red herring.  The post should simply say, "Don't cite dubious sources without checking them out."  The end.  Doesn't matter whether the sources are humans or LLMs.  I consider most recent LLMs more-reliable than most people.  Not because they're reliable; because human reliability is a very low bar to clear.

The main point of my 1998 post "Believable Stupidity" was that the worst failure modes of AI dialogue are also failure modes of human dialogue.  This is even more true today.  I think humans still produce more hallucinatory dialogue than LLMs.  Some I dealt with last month:

  • the millionaire white male Ivy-league grad who accused me of disagreeing with his revolutionary anti-capitalist politics because I'm privileged and well-off, even though he knows I've been unemployed for years, while he just got his third start-up funded and was about to buy a $600K house
  • friends claiming that protestors who, on video, attacked a man from several sides before he turned on them, did not attack him, but were minding their own business when he attacked them
  • my fundamentalist Christian mother, who knows I think Christianity is completely false, keeps quoting the Psalms to me, and is always surprised when I don't call them beautiful and wise

These are the same sort of hallucinations as those produced by LLMs when some keyword or over-trained belief spawns a train of thought which goes completely off the rails of reality.

Consider the notion of "performativity", often traced back to the Nazi activist Heidegger.  This is the idea that the purpose of much speech is not to communicate information, but to perform an action, and especially to enact an identity such as a gender role or a political affiliation.

In 1930s Germany, this manifested as a set of political questions, each paired with a proper verbal response, which the populace was trained in behavioristically, via reward and punishment.  Today in the US, this manifests as two opposing political programs, each consisting of a set of questions paired with their proper verbal responses, which are taught via reward and punishment.

One of these groups learned performativity from the Nazis via the feminist Judith Butler.  The other had already learned it at the First Council of Nicaea in 325 AD, in which the orthodox Church declared that salvation (and not being exiled or beheaded) depended on using the word homoousios instead of homoiousios, even though no one could explain the difference between them.  The purpose in all four cases was not to make an assertion which fit into a larger argument; it was to teach people to agree without thinking by punishing them if they failed to mouth logical absurdities.

So "We have to listen to each other's utterances as assertions" is a very Aspie thing to say today.  The things people argue about the most are not actually arguments, but are what the post-modern philosophers Derrida and Barthes called "the discourse", which they claimed was necessarily hallucinatory in exactly the same way LLMs are today (being nothing but mash-ups of earlier texts).  Take a stand against hallucination as normative, but don't point to LLMs when you do it.

Should you make stone tools?
PhilGoetz2mo*90

Yeah, probably.  Sorry.

I didn't paste LLM output directly. I had a much longer interaction with 2 different LLMs, extracted the relevant output from different sections, combined it, and condensed it into the very short text posted.  I checked the accuracy of the main points about the timeline, but I didn't chase down all of the claims as thoroughly as I should have when they agreed with my pre-existing but not authoritative opinion, and I even let bogus citations slip by.  (Both LLMs usually get the author names right, but often hallucinate later parts of a citation.)

I rewrote the text, keeping only claims that I've verified, or that are my opinions or speculations. Then I realized that the difficult, error-laden, and more-speculative section I spent 90% of my time on wasn't really important, and deleted it.

Should you make stone tools?
PhilGoetz3mo70

Me too!  I believe that evolution DID fix it--apes don't have this problem--and that the scrotum devolved after humans started wearing clothes.  'Coz there's no way naked men could run through the bush without castrating themselves.

Wikitag Contributions

  • Group Selection, 14 years ago (+4/-3)
  • Group Selection, 14 years ago (+17)
  • Group Selection, 14 years ago (+758)
Posts

  • Good HPMoR scenes / passages? [Question] (15 points, 2y, 17 comments)
  • On my AI Fable, and the importance of de re, de dicto, and de se reference for AI alignment (9 points, 2y, 5 comments)
  • Why Bayesians should two-box in a one-shot (0 points, 8y, 30 comments)
  • What conservatives and environmentalists agree on (14 points, 9y, 33 comments)
  • Increasing GDP is not growth (21 points, 9y, 24 comments)
  • Stupidity as a mental illness (26 points, 9y, 139 comments)
  • Irrationality Quotes August 2016 (8 points, 9y, 11 comments)
  • Market Failure: Sugar-free Tums (6 points, 9y, 31 comments)
  • "3 Reasons It’s Irrational to Demand ‘Rationalism’ in Social Justice Activism" (18 points, 10y, 189 comments)
  • The increasing uselessness of Promoted (30 points, 10y, 12 comments)