All of shawnghu's Comments + Replies

Great idea, I'm going to try this out.

I think there is a confusion of terms here.

The illustration is of a function from R to R, so in that sense it's 1-D. But the function, considered as a vector, is infinite-dimensional over R.
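To spell out the standard argument behind that claim (my own addition, not from the thread): the monomials alone already rule out any finite basis.

```latex
% Sketch: the space of functions f : R -> R is infinite-dimensional over R.
% The monomials 1, x, x^2, ..., x^n are linearly independent for every n:
% if a_0 + a_1 x + ... + a_n x^n = 0 for all x, then every a_i = 0,
% since a nonzero degree-n polynomial has at most n real roots.
\[
  \dim_{\mathbb{R}}\{f : \mathbb{R} \to \mathbb{R}\}
  \;\ge\; \dim_{\mathbb{R}} \operatorname{span}\{1, x, x^2, \dots\}
  \;=\; \infty.
\]
```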

I didn't learn about disown or nohup until recently, because there was no impetus to: I'd been using tmux. (My workflow otherwise depended on tmux as well; when developing locally I liked its way of managing terminal tabs/splits.)
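For anyone in the same boat, here's a minimal sketch of what disown/nohup buy you, written in Python since that's how I'd script it (the job script name is hypothetical): detach the child from the terminal session so it survives the hangup that kills ordinary background jobs.

```python
import subprocess

# Launch a long-running job that outlives the terminal, roughly what
# `nohup cmd &` followed by `disown` accomplish in a shell.
# start_new_session=True runs setsid() in the child, detaching it from
# the controlling terminal so it won't receive SIGHUP on logout.
with open("job.log", "ab") as log:
    proc = subprocess.Popen(
        ["python", "long_job.py"],  # hypothetical command
        stdout=log,
        stderr=subprocess.STDOUT,
        stdin=subprocess.DEVNULL,
        start_new_session=True,
    )
print(f"detached pid: {proc.pid}")
```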

The tricky thing about doom scenarios like this is that I'm not even sure that the AI is wrong.

Oops. I didn't look at the notation closely and assumed a substantially different thing based on the word "distinguishable". Oh well, I hope you guys will think my application was adequate anyway.

Is anyone else noticing that Claude (Sonnet 3.5 new, the default on claude.ai) is a lot worse at reasoning recently? In the past five days or so its rate of completely elementary reasoning mistakes, which persist despite repeated clarification in different ways, seems to have skyrocketed for me.

4cubefox
Maybe they are preparing to switch from merely encouraging their main model to do CoT (the old technique) to a full RL-based reasoning model. I recently saw this before the GUI aborted and said the model was over capacity: [...] Then it wouldn't make sense anymore to have the non-reasoning model attempt CoT.

For the longest time, I would have used the convolutional architecture as an example of one of the few human-engineered priors still necessary in large-scale machine learning tasks.

But in 2021, the Vision Transformer paper included the following excerpt: "When trained on mid-sized datasets such as ImageNet without strong regularization, these models yield modest accuracies of a few percentage points below ResNets of comparable size. This seemingly discouraging outcome may be expected: Transformers lack some of the inductive biases inherent to CNNs, ..."
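To make the contrast concrete, here is a toy PyTorch sketch (my own illustration, not from the paper): the conv layer hard-codes locality and weight sharing, while the ViT-style patch embedding is just a linear projection of 16x16 patches, leaving everything else for attention to learn.

```python
import torch
import torch.nn as nn

# Convolution: locality + translation-equivariant weight sharing are
# baked into the architecture via small shared kernels.
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)

# ViT-style patch embedding: no locality prior beyond cutting patches;
# it is a linear projection of each 16x16 patch, commonly implemented
# as a conv with kernel_size == stride == patch_size.
patch_embed = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=16, stride=16)

x = torch.randn(1, 3, 224, 224)  # dummy image batch
feat_conv = conv(x)              # (1, 64, 224, 224): per-pixel local features
tokens = patch_embed(x).flatten(2).transpose(1, 2)  # (1, 196, 64): patch tokens
print(feat_conv.shape, tokens.shape)
```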

This is a good post; it articulated several of my own critiques of the ROME paper well and, furthermore, saved me time in understanding the nuts-and-bolts details of the paper. It was also somewhat helpful to see the results of some of the experiments you did.

I don't believe you technically mentioned this, though you mentioned many things that are conceptually similar: observing the limitations of the ROME paper made me realize that, even given ideal model-editing powers, the task of editing a model's understanding is underspecified: ...
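For context on what "ideal model-editing powers" idealize: ROME's core mechanical move is a rank-one update to an MLP weight matrix so that a chosen key vector maps to a chosen value vector. Here's a toy numpy sketch of that update (omitting the key-covariance weighting the paper uses, and sidestepping how k and v are actually derived, which is exactly where the underspecification bites):

```python
import numpy as np

def rank_one_edit(W, k, v):
    """Return W' = W + (v - W k) k^T / (k^T k), the minimal-Frobenius-norm
    rank-one update satisfying W' @ k == v (toy version of ROME's edit,
    without the key-covariance term used in the paper)."""
    residual = v - W @ k
    return W + np.outer(residual, k) / (k @ k)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))  # stand-in for an MLP projection matrix
k = rng.normal(size=16)       # "key" vector for the edited fact
v = rng.normal(size=8)        # desired "value" the key should map to

W_edited = rank_one_edit(W, k, v)
print(np.allclose(W_edited @ k, v))  # True: the association is inserted
```

The update itself is exact and minimal, but nothing in it says which related facts should change alongside the edited one; that choice lives entirely in how k and v are constructed, which is the underspecification I mean.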