gwern

https://gwern.net/

Comments, sorted by newest
The Doomers Were Right
gwern · 7h · 20

Even if that were true, it might not mean anything. Why might a country not invest in Y2K prevention? Well, maybe it's not a problem there! You don't decide on investments at random, after all.

And this is clearly a case where (1) USA/Western investments would save a lot of other countries the need to invest in Y2K prevention because that is where most software comes from; and (2) those countries might not have the problem in the first place because they computerized later (and skipped the phase of hardwiring in dangerously short data types), or hadn't computerized at all. ("We don't have a Y2K problem because we don't have any computers" doesn't imply Y2K prevention is a bad idea.)
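
(For concreteness on the "dangerously short data types" point, a purely illustrative sketch; the field names and values are hypothetical, not from any real system.)

```python
# Purely illustrative: the classic two-digit-year failure from storing years
# in a dangerously short data type ("99" = 1999, "00" = 2000).
def is_expired(expiry_yy: str, current_yy: str) -> bool:
    # Naive comparison: "00" (meant as 2000) sorts before "99" (1999).
    return expiry_yy < current_yy

print(is_expired("00", "99"))  # True: a year-2000 expiry wrongly reads as already expired in 1999
```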

Penny's Hands
gwern · 2d · 80

The genre here is psychological horror fiction, and the style is first-person short story; so it's reminiscent of Edgar Allan Poe or Ted Chiang; but it's not clearly condensed or tightly edited the way those tend to be, and the narrator's style is prolix and euphuistic. From an editing perspective, I think the question I would have is to what extent this is a lack of editing & killing-your-darlings, versus a deliberate unreliable-narrator stylistic choice in which the prose is trying to mirror the narrator's brute-force piano style or perhaps the dystonia-inducing music itself (because it can be hard to distinguish between deliberately flawed writing and just flawed writing - especially in the vastness of the Internet where there is so much flawed writing). I expect the latter given Tomas's other stories, but I think it's not sufficiently well done if I have to even ask the question. So that could probably be improved by more precise writing, or more ostentatiously musical structure, or some carefully chosen formal game; but also one could imagine making much more drastic revisions and escalating the horror in a more linguistic way, like a variant in which the narrator suffers an aphasia or dyslexia from reading a pessimized piece of text (revenge?) and the writing itself disintegrates towards the end, say.

If Anyone Builds It Everyone Dies, a semi-outsider review
gwern · 3d · 40

Hard to say. Oyster larvae are highly mobile and move their bodies around extensively both to eat and to find places to eventually anchor to, but I don't know how I would compare that to spores or seeds, say, or to lifetime movement; and oysters "move their bodies around" and are not purely static - they would die if they couldn't open and close their shells or pump water. (And all the muscle they use to do that is why we eat them.)

In remembrance of Sonnet '3.6'
gwern · 3d · 130

"Anthropic doesn't delete weights of released models"

How do you know that? Because OpenAI has done that.

Remarks on Bayesian studies from 1963
gwern · 4d · 140

Fulltext link: https://gwern.net/doc/statistics/bayes/1963-mosteller.pdf, since it doesn't turn up in GS. (One of many papers I host with clean metadata, easily scraped, and listed in the sitemap.xml for a long time, which still don't turn up for unknown reasons.)
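
(For anyone who wants to verify the listing themselves, a quick sketch; the standard /sitemap.xml path is my assumption, not something stated above.)

```python
# Check whether the hosted PDF is listed in the site's sitemap for crawlers.
# Assumes a single standard sitemap at /sitemap.xml with the usual namespace.
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP = "https://gwern.net/sitemap.xml"
TARGET = "https://gwern.net/doc/statistics/bayes/1963-mosteller.pdf"

with urllib.request.urlopen(SITEMAP) as response:
    tree = ET.parse(response)

# Standard sitemaps list each page as <url><loc>...</loc></url> in this namespace.
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
listed = {loc.text.strip() for loc in tree.findall(".//sm:loc", ns)}
print(TARGET in listed)
```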

japancolorado's Shortform
gwern · 4d* · 82

Isn't that just mode-collapse?

If Anyone Builds It Everyone Dies, a semi-outsider review
gwern · 5d · 52

Plants have many ways of moving their bodies, like roots and phototropism, in addition to an infinite variety of dispersal & reproductive mechanisms which arguably are how plants 'move around'. (Consider computer programs: they 'move' almost solely by copying themselves and deleting the original. It is rare to move a program by physically carrying around RAM sticks or hard drives.) Fungi likewise often have flagella or grow, in addition to all their sporulation and their famous networks.

The Dark Arts of Tokenization or: How I learned to start worrying and love LLMs' undecoded outputs
gwern · 6d · 20

There's no reason whatsoever to think that they are independent of each other. The very fact that you can classify them systematically as 'positive' or 'negative' valence indicates they are not, and you don't know what 'raw chance' here yields. It might be quite probable.
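
(A toy illustration with made-up numbers, not anything from the post: once the valences share a common cause rather than being independent coin flips, an "everything lines up" pattern stops being a long-shot.)

```python
# Toy simulation: probability that all items end up with the same valence,
# under independence vs. under a shared latent cause. Numbers are arbitrary.
import random

random.seed(0)
N_ITEMS, TRIALS = 10, 100_000

def p_all_same_valence(correlated: bool) -> float:
    hits = 0
    for _ in range(TRIALS):
        if correlated:
            mood = random.random() < 0.5          # shared latent factor
            valences = [mood if random.random() < 0.9 else (not mood)
                        for _ in range(N_ITEMS)]   # each item leans the same way
        else:
            valences = [random.random() < 0.5 for _ in range(N_ITEMS)]
        hits += all(valences) or not any(valences)
    return hits / TRIALS

print(p_all_same_valence(False))  # ~0.002 under independence
print(p_all_same_valence(True))   # ~0.35 with a common cause
```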

The Dark Arts of Tokenization or: How I learned to start worrying and love LLMs' undecoded outputs
gwern · 6d · 30

I don't know what the samples have to do with it. I mean simply that the parameters of the model are initialized randomly and from the start all tokens will cause slightly different model behaviors, even if the tokens are never seen in the training data, and this will remain true no matter how long it's trained.
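
(A minimal sketch with toy sizes, not any particular model's setup: two token ids that never appear in training still get distinct random embeddings at initialization, and therefore distinct behavior that nothing ever trains away.)

```python
# Randomly initialized embedding/unembedding matrices already give every token
# id its own distinct behavior, including ids absent from the training data.
import numpy as np

rng = np.random.default_rng(0)
vocab, d_model = 50_000, 64
embed = rng.normal(scale=0.02, size=(vocab, d_model))
unembed = rng.normal(scale=0.02, size=(d_model, vocab))

a, b = 49_998, 49_999                 # hypothetical never-trained token ids
logits_a = embed[a] @ unembed
logits_b = embed[b] @ unembed

# Different random rows -> different logits; if these rows never occur in
# training, no gradient ever updates them to wash the difference away.
print(np.allclose(logits_a, logits_b))                 # False
print(int(logits_a.argmax()), int(logits_b.argmax()))  # almost surely different
```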

The Dark Arts of Tokenization or: How I learned to start worrying and love LLMs' undecoded outputs
gwern · 6d · 30

Or it could just be randomness from initialization which hasn't been washed away by any training. "Subliminal" encoding immediately comes to mind.

Posts

11 · gwern's Shortform [Ω] · 5y · 62 comments
20 · "Tilakkhana", Gwern [poem] · 4d · 0 comments
27 · Security Mindset: Hacking Pinball High Scores · 5mo · 3 comments
215 · If you're not sure how to sort a list or grid—seriate it! · 5mo · 7 comments
61 · October The First Is Too Late · 5mo · 10 comments
60 · Ideas for benchmarking LLM creativity · 10mo · 11 comments
136 · "Can AI Scaling Continue Through 2030?", Epoch AI (yes) · 1y · 4 comments
22 · "On the Impossibility of Superintelligent Rubik’s Cube Solvers", Claude 2024 [humor] · 1y · 6 comments
176 · FHI (Future of Humanity Institute) has shut down (2005–2024) · 2y · 22 comments
428 · Douglas Hofstadter changes his mind on Deep Learning & AI risk (June 2023)? · 2y · 54 comments
72 · COVID-19 Group Testing Post-mortem? [Q] · 3y · 6 comments
Wikitag Contributions

Adaptation Executors · 6 years ago · (-385)
Adaptation Executors · 6 years ago · (+385)
LessWrong Presence on Reddit · 6 years ago · (-22)
Simulation Hypothesis · 9 years ago · (+27/-2693)
Bayesian Conspiracy · 9 years ago · (+11)
Bayesian Conspiracy · 9 years ago
On Designing AI (Sequence) · 9 years ago
Robot · 9 years ago · (-31)
Robot · 9 years ago · (-119)
Robot · 9 years ago · (-33)