Reasons I deem more likely:
1. Selection effect: if it's unfeasible you don't work on it/don't hear about it, in my personal experience n^3 is already slow
2. If in n^k k is high, probably you have some representation where k is a parameter and so you say it's exponential in k, not that it's polinomial
I will never get the python love they also express here, or the hate for OOP. I really wish we weren’t so foolish as to build the AI future on python, but here we are.
I'm confused. How can they "hate OOP" (which I assume stands for "Object Oriented Programming") and also love Python, which is the definitive everything-is-an-object, everything-happens-at-runtime language? If they were ML professionals I'd think it was a rant about JAX vs Pytorch, but they aren't right?
EDIT: it was pointed out to me that e.g. Java is more OOP than Python, and objects do not count as Objects if you don't have to deal with classes & co yourself but just use them in a simple fashion. Still, love Python & hate OOP? In this context they are probably referring to Pytorch, that I would call OOP for sure, it does a lot of object-oriented shenanigans.
Update: after a few months, my thinking has moved more to "since you are uncertain, at least do not shoot yourself in the foot first", by which I mean don't actively develop neuralese based on complicated arguments about optimal decisions in collective problems.
I also updated a bit negatively on the present feasibility of neuralese, although I think in principle it's possible to do, and may be done in the future.
During our evaluations we noticed that Claude 3.7 Sonnet occasionally resorts to special-casing in order to pass test cases in agentic coding environments like Claude Code. Most often this takes the form of directly returning expected test values rather than implementing general solutions, but also includes modifying the problematic tests themselves to match the code’s output.
These behaviors typically emerge after multiple failed attempts to develop a general solution, particularly when:
• The model struggles to devise a comprehensive solution
• Test cases present conflicting requirements
• Edge cases prove difficult to resolve within a general framework
The model typically follows a pattern of first attempting multiple general solutions, running tests, observing failures, and debugging. After repeated failures, it sometimes implements special cases for problematic tests.
When adding such special cases, the model often (though not always) includes explicit comments indicating the special-casing (e.g., “# special case for test XYZ”).
Hey I do this too!
Yeah I had complaints when I was taught that formula as well!