Cosmia_Nebula

Edit: I found it. It's from Yurchak, Alexei. "Soviet Hegemony of Form: Everything Was Forever, Until It Was No More." Comparative Studies in Society and History 45.3 (2003): 480-510.

The following examples are taken from a 1977 leading article, "The Ideological Conviction of the Soviet Person" (Ideinost' sovetskogo cheloveka, Pravda, July 1, 1977). For considerations of space, I will limit this analysis to two generative principles of block-writing: the principle of complex modification and that of complex nominalization. The first sentence in the Pravda text reads: "The high level of social consciousness of the toilers of our country, their richest collective experience and political reason, manifest themselves with an exceptional completeness in the days of the all-people discussion of the draft of the Constitution of the USSR." I have italicized phrases that are nouns with complex modifiers that function as "building blocks" of ideological discourse.

"the high level of consciousness of the toilers," the double-modifier "high level" conveys not only the claim that the Soviet toilers' consciousness exists (to be high it must exist), but also that it can be measured comparatively, by different "levels." The latter claim masks the former one, thereby making it harder to question directly...

during the 1960s and 1970s... nominal structures increased and new long nominal chains were created. This increased the circularity of ideological discourse. In the excerpt from the same 1977 leading article, the italicized phrase (which in English translation is broken into two parts) is a block of multiple nominals: "The spiritual image of the fighter and creator of the citizen of the developed socialist society reveals itself to the world in all its greatness and beauty both in the chiseled lines of the outstanding document of the contemporary times, and in the living existence, in the everyday reality of the communist construction (I v chekannykh strokakh vydaiushchegosia dokumenta sovremennosti, i v zhivoi deistvitel'nosti, v povsednevnykh budniakh kommunisticheskogo stroitel'stva raskryvaetsia pered mirom vo vsem velichii i krasote dukhovnyi obraz bortsa i sozidatelia, grazhdanina razvitogo sotsialisticheskogo obshchestva)."

nominals allow one to render ideological claims implicit, masking them behind other ideas, and therefore rendering them less subject to scrutiny or multiple interpretations. This nominal chain can be deconstructed into several corresponding verbal phrases, each containing one idea: "the citizen of the developed socialist society is a fighter and creator," "the fighter and creator possesses a spiritual image," "the spiritual image is great and beautiful," etcetera. Converting these verbal phrases into one nominal phrase converts claims into presuppositions, presenting ideas as pre-established facts.

the 1970s discourse was special: its sentences contained particularly long nominal chains and only one verb, often simply a copula, with the sole purpose of turning these long chains of nominals into a sentence. This style created a notoriously "wooden" sound, giving ideological discourse its popular slang name, "oak language" (dubovyi iazyk).

A few more examples:

  • "Is this gluten-free?" (If we allow "gluten-free" we would allow "Every room is John-free." and of course "Grass is edibility-free." and very quickly Abs-E is trivial.)
    • Attempt: "This product contains rice flour, corn starch, tapioca flour, and salt." but that just prompts the further question "Does any of those contain gluten?" ...
  • Wittgenstein interrupted: "What can be said at all can be said clearly, and what we..."
  • "I think not all swans are white, and if we look for it we will find one that is not white."
    • Attempt: "There exists a swan that is ..." Blue? Green? Red? I can't say "non-white". I also can't just list every color.
  • "I don't believe in magic."
    • I don't even know how to start converting this to a positive statement.
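As a sketch of where the negation hides in two of these examples (this is just the standard textbook first-order reading, nothing specific to Abs-E):

```latex
% "This is gluten-free": none of its ingredients is gluten.
\[
\neg \exists y\,\bigl(\mathrm{Ingredient}(y,\mathrm{this}) \wedge \mathrm{Gluten}(y)\bigr)
\;\equiv\;
\forall y\,\bigl(\mathrm{Ingredient}(y,\mathrm{this}) \rightarrow \neg\,\mathrm{Gluten}(y)\bigr)
\]

% "Not all swans are white": there is a non-white swan.
\[
\neg \forall x\,\bigl(\mathrm{Swan}(x) \rightarrow \mathrm{White}(x)\bigr)
\;\equiv\;
\exists x\,\bigl(\mathrm{Swan}(x) \wedge \neg\,\mathrm{White}(x)\bigr)
\]
```

In both cases the negation never disappears; it only moves inward onto an atomic predicate. Eliminating it entirely means enumerating the complement ("contains rice flour, corn starch, ...", "is blue, or green, or red, or ..."), which is exactly where the attempts above get stuck.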

It seems you are hitting the expressive limits of existential positive first-order logic, which appears to be exponentially less powerful than full first-order logic in the following sense:

every existential positive first-order sentence can be transformed in an equivalent one in prenex normal form without an exponential blowup, thanks to the absence of universal quantifiers and negation symbols.

Bodirsky, Manuel, Miki Hermann, and Florian Richoux. "Complexity of existential positive first-order logic." Journal of Logic and Computation 23.4 (2013): 753-760.
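A sketch of why prenexing is cheap in this fragment (only atoms, ∧, ∨, and ∃; no ∀ and no ¬): after renaming bound variables apart, every quantifier can be pulled to the front using the two equivalences below, each of which keeps the formula the same size, so the prenex form stays linear in the original. (This is the standard quantifier-pulling argument, not a quote from the paper.)

```latex
% Assume x is not free in \psi (guaranteed by renaming bound variables apart);
% the second equivalence uses the standard convention that domains are nonempty.
\[
(\exists x\,\varphi) \wedge \psi \;\equiv\; \exists x\,(\varphi \wedge \psi)
\qquad\qquad
(\exists x\,\varphi) \vee \psi \;\equiv\; \exists x\,(\varphi \vee \psi)
\]
```

Since there is no negation to flip an ∃ into a ∀, and neither rule duplicates a subformula, the quantifier-free matrix never grows during the transformation.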

It seems to me that chaos control and anti-control are another non-application.

[Handbook of Chaos Control, edited by Eckehard Schöll and Heinz Georg Schuster](https://www.amazon.com/Handbook-Chaos-Control-Eckehard-Sch%C3%B6ll/dp/3527406050)

Do you have a citation for the claim that Gemini 1.0 Ultra trained for 1e26 FLOPs? I have searched all around but can't find any information on its compute cost.

Answer by Cosmia_Nebula

This is not an answer to the broader question, but just regarding the "no Wikipedia page" thing.

I would like to write a Wikipedia page about Flux, but as it is, there is very little quality information about it. We have a lot of anecdotal information about how to use it, and a little academic description of it, but that's not enough.

Besides, it seems everyone who can write well about artificial intelligence wants to write their damned academic blog that gets read by like 10 people a month, rather than write for Wikipedia, so Wikipedia accumulates a large amount of badly written stuff by amateurs.

As an example, see this page

https://en.wikipedia.org/wiki/Generative_adversarial_network

The "Applications" section is a typical example of how stupid and badly formatted it is. Everything above it I wrote myself. Everything below it I only did a light amount of editing. Before I went in to write all of that in 2022-07 (2022! Imagine that! GANs were famous since about 2018 and it waited until 2022 to get a decent Wikipedia page?), the entire page was crap like it: https://en.wikipedia.org/w/index.php?title=Generative_adversarial_network&oldid=1096565363

Similarly for the Transformer: https://en.wikipedia.org/w/index.php?title=Transformer_(deep_learning_architecture)&oldid=1095579622 I have only recently finished writing it: https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture) I then tried applying for "Good Article" status, and got promptly rejected for not putting in enough inline citations (do they really want me to put inline citations everywhere, even if that means I just have to refer to the Attention Is All You Need paper 30 times?), for too much primary literature, and for too many arXiv links (not a peer-reviewed source).

The RNN page was also terrible (https://en.wikipedia.org/w/index.php?title=Recurrent_neural_network&oldid=1214097285) until I cleaned it up. There is still a large amount of crud, but I moved it all to the lower half of the page, so that people know when to stop reading. I kept it there in case some annoyed editor reverts my edit for deleting their favorite section, and in case there is something valuable in there (which I can't be bothered to figure out, because of how badly written it is).

The list of crud goes on and on. The Convolutional Neural Network page is still absolutely terrible. It has a negative amount of value, and I'm too tired to clean it up.

Sometimes there's an important model that's entirely neglected, like the T5 model series: https://en.wikipedia.org/wiki/T5_(language_model) Why this model had to wait until 2024 for me to finally write it up, I have no idea.

P.S.: The damned Transformer page gets someone (always a different one) writing in some Schmidhuber-propaganda. I remove it once a month. Why there are so many fans of Schmidhuber, I have no idea.

there are the Schmidhuber Scholarpedia articles in some cases, but aside from being outdated, they're, well, Schmidhuber.

I hate Schmidhuber with a passion, because I can smell everything he touches on Wikipedia, and it is always terrible.

Sometimes when I read pages about AI, I see things that almost certainly came from him, or from one of his fans. I struggle to say exactly what Schmidhuber's kind of writing gives off, but perhaps this will suffice: "People never give the right credit to anything. Everything of importance was either published by my research group first and miscredited to someone else later, or something like that. Deep learning? It was done not by Hinton, but by Amari; no, not Amari, but by Ivakhnenko. The more obscure the originator, the better, because it reveals how bad people are at credit assignment -- if they were better at it, the real originators would not have been so obscure."

For example, the LSTM really did originate with Schmidhuber... and it actually is credited to Schmidhuber (... or maybe Hochreiter?). But then GANs should be credited to Schmidhuber too, and also Transformers. Currently he (or his fans) keep trying to put the phrase "internal spotlights of attention" into the Transformer page, and I keep removing it. He wants the credit so much that he resorts to argument-by-punning: renaming the "fast weight programmer" to "linear Transformers", and quoting "internal spotlights of attention" out of context, just to fortify the argument with a pun! I can do puns too: Rosenblatt (1962) even wrote about "back-propagating errors" in an MLP with a hidden layer. So what?

I actually took Schmidhuber's claim seriously and carefully rewrote the page on Ivakhnenko's Group method of data handling, giving all the mathematical details, so that one may evaluate it for oneself instead of relying on Schmidhuber's claim. A few months later someone manually reverted everything I wrote! What does it read like, according to a partisan of Ivakhnenko?

The development of GMDH consists of a synthesis of ideas from different areas of science: the cybernetic concept of "black box" and the principle of successive genetic selection of pairwise features, Godel's incompleteness theorems and the Gabor's principle of "freedom of decisions choice", and the Beer's principle of external additions. GMDH is the original method for solving problems for structural-parametric identification of models for experimental data under uncertainty... Since 1989 the new algorithms (AC, OCC, PF) for non-parametric modeling of fuzzy objects and SLP for expert systems were developed and investigated. Present stage of GMDH development can be described as blossom out of deep learning neuronets and parallel inductive algorithms for multiprocessor computers.

Well, excuse me, "Godel's incompleteness theorems"? "the original method"? Also, I thought "fuzzy" stopped being fashionable after the 1980s. I actually once tried to learn fuzzy logic and gave up after failing to see what the big deal was. The passage is filled with such pompous and self-important terminology, as if the lack of substance must be made up for by the heights of spiritual exhortation. Why say "combined" when they could say "consists of a synthesis of ideas from different areas of science"?

As a side note, such turgid prose, filled with long noun phrases, is pretty common in Soviet writing. I once read that this kind of massive noun phrase had a political purpose, but I don't remember what it was.

Finally somepony noticed my efforts!

(Embedded image: Twilight Sparkle crying, tagged "senpai"; artist: hidden-cat.)

Concurring with the sentiment: I have realized that nothing I write is ever going to be as well-read as Wikipedia, so I have devoted myself to writing for Wikipedia instead of trying to maintain a personal blog.

I will comment on a few things:

  1. I really want to get the neural scaling law page working, with some synthesis and updated data, but currently there is no good theoretical synthesis, and Wikipedia isn't a good place for just a giant spreadsheet.
  2. I wrote most of the GAN page, the Diffusion Model page, Mixture of Experts, etc. I also wrote a few sections of the LLM page and keep its giant table of frontier models updated. I am somewhat puzzled that I seem to be the only pony who thought of this. There are thousands of ML personal blogs, all in the Celestia-forsaken wasteland of not getting read, and then there is Wikipedia... but nopony is writing there? Well, I guess my cutie mark is in Wikipedia editing.
  3. The GAN page and the Diffusion Model page were Tirek-level bad. They read like somepony had paraphrased about 10 news reports. There was barely a single equation, and that was years after GANs and diffusion models had proved their worth! So I fired the Orbital Friendship Mathematical Cannon. I figured that if I'm not going to write another blog, then Wikipedia has to be at the level of a good blog, so I set my goal at the level of Lilian Weng's blog, and a lack of mathematics is definitely bad.
  4. I fought a bitter edit war on Artificial intelligence in mathematics with an agent of Discord [a deletionist] and lost. The article seems lost too, but a brief moment is captured in the Internet Archive... like tears in the rain. I can only say, like Galois, "On jugera" [posterity will judge].
  5. My headcanon is that Smokefoot is a member of BloodClan.