All of tay's Comments + Replies

tay107

Kolmogorov complexity is defined relative to a fixed encoding, and yet this topic seems to be absent from the article.

Writing a solver for a system of linear equations in plain BASIC would constitute a decent-sized little project, while in Octave it would be a one-liner.
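As a concrete illustration of that point (a minimal sketch of mine, not from the comment, with Python and NumPy standing in for Octave): with a linear-algebra library available, the solver is essentially one line, while without it you have to spell out Gaussian elimination yourself, so the "description length" of the same task depends heavily on the language's primitives.

```python
# A minimal sketch of how description length depends on the primitives the
# language provides. With NumPy available, solving a linear system is one
# line; without such a library, you write out Gaussian elimination by hand.
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])

# "Octave-style" one-liner: the heavy lifting lives in the library.
x_short = np.linalg.solve(A, b)

# "Plain BASIC-style" version: naive Gaussian elimination spelled out.
def solve_by_hand(A, b):
    A = [row[:] for row in A.tolist()]
    b = b.tolist()
    n = len(b)
    for i in range(n):
        # Partial pivoting: pick the row with the largest pivot element.
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    x = [0.0] * n
    for i in reversed(range(n)):
        s = sum(A[i][c] * x[c] for c in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

x_long = solve_by_hand(A, b)
print(x_short, x_long)  # both ~[0.8, 1.4]
```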

Taking your Tetris example, sure, 6KB seems small -- as long as you restrict yourself to the space of all possible programs for the Gameboy, or whichever platform you took this example from. But if your goal is to encode Tetris for a computer engineer who has no knowledge of the Gameboy, you will have... (read more)

1fdrocha
I don't think this is making it a fairer comparison. For bacteria, doesn't that mean you'd have to include descriptions of DNA, amino acids, proteins in general, and everything known about the specific proteins used by the bacteria, etc.? You quickly end up with a decent chunk of the Internet as well. Kolmogorov complexity is not about how much background knowledge or computational effort was required to produce some output from first principles. It is about how much, given infinite knowledge and time, you can compress a complete description of the output. Which maybe means it's not the right metric to use here...
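For reference, and as background for both comments above (standard definitions, not part of the original thread): Kolmogorov complexity is defined relative to a universal machine U, and the invariance theorem only guarantees that switching machines changes the complexity by an additive constant.

```latex
% Kolmogorov complexity of a string x, relative to a universal machine U:
% the length of the shortest program p that makes U output x.
K_U(x) = \min \{\, |p| : U(p) = x \,\}

% Invariance theorem: for universal machines U and V there is a constant
% c_{U,V}, independent of x, such that
K_U(x) \le K_V(x) + c_{U,V}
```

The constant c_{U,V} absorbs things like "an emulator for the other machine", which is why a figure such as "Tetris fits in 6KB" depends so heavily on the chosen reference machine.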
3Noosphere89
I'm actually convinced that at least here, evolution mostly cannot do this, and that the ability to extract knowledge about the world and transmit it to the next generation correctly enough to get a positive feedback loop is the main reason why humanity has catapulted into the stratosphere, and it's rare for this in general to happen. More generally, I'm very skeptical of the idea that much learning happens through natural selection, and the stuff about epigenetics that was proposed as a way for natural selection to encode learned knowledge is more-or-less fictional: https://www.lesswrong.com/posts/zazA44CaZFE7rb5zg/transhumanism-genetic-engineering-and-the-biological-basis#JeDuMpKED7k9zAiYC
tay2-1

It is easier to tell apart a malicious function from a line of code, a file from a function, a repo from a file, or an app from a repo.

This paragraph does not make sense to me. (Maybe my reading comprehension is not up to the task).


Is the thesis that the same line of code may be malicious or not, depending on its context?

I would say that it is easier to judge the maliciousness of a single line of code than of the whole function, simply because the analysis of the whole function requires far more resources. You can rule out certain classes of threats... (read more)
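To make the context point concrete, here is a minimal, hypothetical sketch of mine (not from the post): the identical call to os.remove reads as routine cleanup in one function and as an arbitrary-file-deletion primitive in another, so the verdict on the line depends on the surrounding code even though checking the line alone is cheap.

```python
# Hypothetical sketch: the very same line can be benign or malicious
# depending on its surrounding context.
import os
import tempfile

def cleanup_temp(workdir: str) -> None:
    # Benign context: removing a file we created ourselves inside a temp dir.
    path = os.path.join(workdir, "scratch.dat")
    open(path, "w").close()
    os.remove(path)  # <- the line under review

def handle_request(user_supplied_path: str) -> None:
    # Suspicious context: the same call, but the path comes straight from
    # untrusted input, so it can delete any file the process can reach.
    os.remove(user_supplied_path)  # <- the identical line under review

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        cleanup_temp(d)
```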

2lemonhope
I should clarify that section. I meant that if you're asked to write a line of code or an app or whatever, then it is easier to guess at intent/consequences for the higher-level tasks. Another example: the lab manager has a better idea of what's going on than a lab assistant.
tay10

a collection of "rakes" worthy of pride

In the spirit of the postscriptum: I do not think "rakes" works the intended way in English (the word just denotes literal rakes). Maybe "a collection of bumps and bruises worthy of pride"?

1bayesyatina
Thank you, I did so.
tay63

Hey, it seems the app is getting its own rule wrong. :)

It says "Three numbers in ascending order"; however [0;0;0] is accepted as valid. It should say "in non-descending order".

[ADDED] Also, index A shows the rule "Two odd numbers, one even; any order" but does not accept [-1, -2, -3].
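For concreteness, a minimal sketch of mine of the two checks as they presumably should behave (the app's actual implementation is unknown to me): "non-descending" allows ties like [0;0;0], and the odd/even count has to handle negative numbers; one plausible way for [-1, -2, -3] to get wrongly rejected is an oddness check like n % 2 == 1 in a language where -1 % 2 == -1.

```python
# Hypothetical re-implementation of the two rules as discussed above;
# this only illustrates the corrected logic, not the app's real code.

def strictly_ascending(xs):
    # What "ascending order" literally says: each number larger than the last.
    return all(a < b for a, b in zip(xs, xs[1:]))

def non_descending(xs):
    # What the app apparently checks: ties allowed, so [0, 0, 0] passes.
    return all(a <= b for a, b in zip(xs, xs[1:]))

def two_odd_one_even(xs):
    # n % 2 != 0 counts negatives as odd too, so [-1, -2, -3] is accepted.
    odd = sum(1 for n in xs if n % 2 != 0)
    return len(xs) == 3 and odd == 2

print(strictly_ascending([0, 0, 0]))   # False
print(non_descending([0, 0, 0]))       # True
print(two_odd_one_even([-1, -2, -3]))  # True: -1 and -3 are odd, -2 is even
```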

3abstractapplic
Well, that's embarrassing. Fixed now; thank you.
Answer by tay76

LLMs per se are non-agentic, but that does not mean that systems built on top of LLMs cannot be agentic. The users of AI systems want them to be agentic to some degree in order for them to be more useful. E.g., if you ask your AI assistant to book tickets and hotels for your trip, you want it to be able to form and execute a plan, and unless it's an AI with a very task-specific trip-planning capability, this implies some amount of general agency. The more use you want to extract from your AI, the more agentic you likely want it to be (see the sketch below).

Once you have a genera... (read more)
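As an entirely hypothetical sketch of what "a system built on top of an LLM" being agentic can mean: the model itself only maps text to text, and the agency comes from the surrounding loop that plans, calls tools, and feeds observations back in. call_llm and the tool names below are placeholders, not any particular vendor's API.

```python
# Hypothetical agent loop around a non-agentic LLM. call_llm() stands in for
# whatever completion API is actually used; the tools are likewise stand-ins.

def call_llm(transcript: list[str]) -> str:
    # Placeholder for a real model call: here it just follows a fixed script
    # so the sketch runs end to end.
    script = [
        "search_flights LIS->BER, May 3",
        "book_hotel Berlin, May 3-7",
        "FINISH: itinerary drafted",
    ]
    step = sum(1 for line in transcript if line.startswith("ACTION:"))
    return script[min(step, len(script) - 1)]

TOOLS = {
    "search_flights": lambda args: f"found flights for {args}",
    "book_hotel": lambda args: f"booked hotel: {args}",
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    transcript = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        # The model only maps text to text; the loop supplies the agency.
        action = call_llm(transcript)
        if action.startswith("FINISH:"):
            return action.removeprefix("FINISH:").strip()
        tool, _, args = action.partition(" ")
        result = TOOLS.get(tool, lambda a: f"unknown tool {tool}")(args)
        transcript.append(f"ACTION: {action}")
        transcript.append(f"OBSERVATION: {result}")
    return "step budget exhausted"

print(run_agent("Book flights and a hotel for my Berlin trip"))
```

The point is purely structural: wrap a text-to-text model in a plan-act-observe loop and the composite system behaves agentically, to whatever degree the loop and its tools allow.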

tay61

English is not my native tongue, so please feel free to discount my opinion accordingly, but I feel like this usage is not unfamiliar to me, mostly from a psychology-adjacent context. I cannot readily find any references to where I've encountered it, but there is this: The Me and the Not-Me: Positive and Negative Poles of Identity (DOI 10.1007/978-1-4419-9188-1_2).

Also, a Google search for "the me" "the you" reports "About 65,500,000 results" (though the results seem to consist mostly of lyrics).

1Mateusz Bagiński
(I'm not a native English speaker either) Yeah, but a conversation with a chatbot is not a psychology-adjacent context (or song lyrics), so if the model learned to put "the" before "me" and "you" from this kind of data, then inserting that into a conversation is still evidence that it was badly trained and/or fine-tuned.
tay1111

This is a very good answer, but it seems like it is not answering the original post. (Or maybe my perception is biased and I am reading something that is not there... I apologize if so).

The main point I took from the post (and with which I wholeheartedly agree, so I am not approaching this topic as rationally as I probably should) is that, when talking about "buying off" Russia with a bit of Ukrainian land, the attention somehow avoids the people living there and what will happen to them if such a compromise were enacted.

Is there a part of the Russian und... (read more)
tay1415

True; but I think one of Viktoria's main points was that any of the "compromises" which surface in popular discussions from time to time, those that involve ceding parts of Ukraine to Russian control, will doom the people living there to the same fate people in Russia are already facing (or worse, because the regime in the newly annexed territories will be more evil simply due to how the Russian system works).

Right now there is an opportunity to liberate the occupied territories, including Crimea, and at least the people of those lands will be saved. When con... (read more)

2ChristianKl
That's true, but at an 80% approval rating for Putin, a majority of them do like their current fate. As far as what fate the people in Crimea want, different people in Crimea want different things. In 2014, before the invasion, 67.9% identified as Russian, 15.7% as Ukrainian, and 12.6% as Crimean Tatar.

When searching Western media, we find an absence of polling data about what the people in Crimea want. If a majority of Crimeans wanted to be part of Ukraine, I think it would be likely that Western players would have commissioned those studies. What we do have is an NPR article, How People In Crimea View The Union With Russia, which lists a few examples of people critical of Russian governance without saying that their view is held by many Crimeans.

Dissolving the Tatar civil society organizations is certainly bad, but unfortunately, Ukraine doesn't care about freedom of association either, and outlawed parties representing the Russian-speaking population in Ukraine. Infringing the freedoms of 13% is less bad than infringing those of 68% of Crimeans.

As far as Hizb ut-Tahrir goes, they are forbidden in Germany, where I live, because they advocated the use of violence for political ends. Interestingly, Russia and Germany both outlawed the organization in 2003. I agree that the actions of Russia are excessive, and I would certainly prefer that they not throw people into prison for 10 to 20 years for belonging to Hizb ut-Tahrir.

Destabilizing the regime does open up new opportunities, but the way Putin is likely to react to that is to use political violence against anybody who isn't loyal but is a threat to his power.
tay30

Could the mistakes be only a special case of a more general trait of being prone to changing its behavior for illegible reasons?

E.g., for me, the behavior in video #3 does not look like a mistake. Initially it feels like possibly straightforward optimizing behavior, similar to case #1, but then the object inexplicably "changes its mind", and that switches my perception of the video to an agentic picture. A mistake is only one possible interpretation; another could be the agent getting a new input (in a way unobvious to the viewer), or maybe something else going on inside the agent's "black box".

1pchvykov
ah, yes! good point - so something like the presence of "unseen causes"? The other hypothesis the lab I worked with looked into was the presence of some 'internally generated forces' - sort of like an 'unmoved mover' - which feels similar to what you're suggesting? In some way, this feels not really more general than "mistakes," but sort of a different route. Namely, I can imagine some internal forces guiding a particle perfectly through a maze in a way that will still look like an automaton.