It can't represent a subjective sense of yellow, because if it did, consciousness would be a linear function. That seems somewhat ridiculous, because I would experience a story about a "dog" differently depending on the context.
Furthermore, LLMs scale "features" by how strongly they appear (e.g., the positive-sentiment vector is scaled up when the text is very positive). So the LLM's conscious processing of positive sentiment would be linearly proportional to how positive the text is, which also seems ridiculous.
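To make the linearity point concrete, here's a toy NumPy sketch of the "feature as direction" picture being criticized. Everything here is illustrative, not real LLM internals: the 16-dimensional state, the `sentiment_dir` vector, and the `readout` probe are all made up. The point is just that under this picture, scaling the feature vector scales the readout exactly linearly.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                                            # toy hidden dimension
sentiment_dir = rng.normal(size=d)
sentiment_dir /= np.linalg.norm(sentiment_dir)    # unit "positive sentiment" direction

base = rng.normal(size=d)                         # some context-dependent hidden state
base -= (base @ sentiment_dir) * sentiment_dir    # remove any pre-existing sentiment component

def readout(hidden):
    """Linear probe: projected strength along the sentiment direction."""
    return hidden @ sentiment_dir

# Under the linear-features picture, "more positive text" just means a
# larger coefficient on the same fixed direction:
for strength in [0.5, 1.0, 2.0]:
    hidden = base + strength * sentiment_dir
    print(strength, readout(hidden))              # readout tracks strength exactly
```

So any downstream quantity read off this direction is, by construction, linearly proportional to the scaling coefficient, which is the property the comment finds implausible for conscious experience.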
I don't expect consciousness to have any useful properties. Say you have a deterministic function y = f(x). You can encode just y = f(x), or y = f(x) where f includes conscious representations in its intermediate layers. The latter does not increase training accuracy in the slightest. Neural networks also have a strong simplicity bias toward low-frequency functions (this has been proven mathematically), and f(x) without consciousness is a much simpler, lower-frequency function to encode than f(x) with consciousness.
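A quick way to see the simplicity bias empirically: train the same tiny network on a low-frequency and a high-frequency target for the same budget, and the low-frequency one is fit far faster. This is only an illustrative sketch; the architecture, learning rate, and step count are arbitrary choices, not the formal result from the literature.

```python
import numpy as np

def train_mse(freq, steps=5000, lr=0.05, hidden=32, seed=0):
    """Fit y = sin(freq * pi * x) with a 1-hidden-layer tanh MLP via full-batch GD."""
    rng = np.random.default_rng(seed)
    x = np.linspace(-1, 1, 200)[:, None]
    y = np.sin(freq * np.pi * x)
    W1 = rng.normal(size=(1, hidden)); b1 = rng.normal(size=hidden)
    W2 = rng.normal(size=(hidden, 1)) / np.sqrt(hidden); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(x @ W1 + b1)
        err = (h @ W2 + b2) - y                # residual (constant factor folded into lr)
        gW2 = h.T @ err / len(x); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)       # backprop through tanh
        gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return float((((np.tanh(x @ W1 + b1) @ W2 + b2) - y) ** 2).mean())

low = train_mse(freq=1)    # low-frequency target: fit well within the budget
high = train_mse(freq=8)   # high-frequency target: barely learned in the same budget
print(low, high)
```

With the same network, data, and step count, the only thing that changed is the frequency content of the target, yet the final losses differ dramatically.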
I removed it. I don't have an agenda; I just included it because it changed my priors on the mechanism for human consciousness. So that subsequently affected my prior for whether or not AI could be conscious.
This is cool! These sparse features should be easily "extractable" by the transformer's key, query, and value weights within a single layer, so I'm wondering whether those weights could somehow be used to make the sparse features easier to "discover".
The only scenario where I think self-distillation would be useful is if you 1) train an LLM on a dataset, 2) fine-tune it to be deceptive/power-seeking, and 3) self-distill it on the original dataset; the self-distilled model would likely no longer be deceptive/power-seeking.
I think self-distillation is better than network compression: it comes with some decently strong theoretical guarantees that you're reducing the complexity of the function, and I haven't really seen the same for network compression.
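For concreteness, the self-distillation loop described above can be sketched as follows. This is a toy: a linear softmax classifier stands in for the LLM, and all the names are illustrative. The key structural point is that the student is trained on the teacher's soft outputs over the original inputs, not on the original hard labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy "teacher": a fixed linear classifier standing in for the trained model.
X = rng.normal(size=(500, 10))
W_teacher = rng.normal(size=(10, 3)) * 0.3
soft_targets = softmax(X @ W_teacher)        # teacher's soft outputs on the original inputs

# Self-distillation: train a fresh student on the teacher's soft targets
# using cross-entropy and full-batch gradient descent.
W_student = np.zeros((10, 3))
for _ in range(2000):
    probs = softmax(X @ W_student)
    grad = X.T @ (probs - soft_targets) / len(X)   # softmax cross-entropy gradient
    W_student -= 0.5 * grad

ce_final = -(soft_targets * np.log(softmax(X @ W_student))).sum(axis=1).mean()
print(ce_final)   # approaches the teacher's own output entropy as the student matches it
```

In the real fine-tuned-then-distilled scenario, the hope is that quirks of the fine-tuned model (like deceptive behavior) that aren't expressed in its outputs on the original dataset simply don't survive this step.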
But what research do you think would be valuable, other than the obvious (self-distill a deceptive, power-hungry model to see if the negative qualities go away)?
As of right now, I don't think LLMs are trained to be power-seeking or deceptive.
Power-seeking is likely when a model is directly maximizing a reward, but LLMs are not quite doing that.
I just wanted to add another angle. Neural networks have a fundamental "simplicity bias": they learn low-frequency components exponentially faster than high-frequency ones. Thus, self-distillation is likely to be more efficient than training on the original dataset, since the function you're learning has fewer high-frequency components. This paper formalizes this claim.
But in practice, what this means is that training GPT-3.5 from scratch is hard, but simply copying GPT-3.5 is pretty easy. Stanford was recently able to fine-tune a pretty bad 7B model to be as good as GPT-3.5 using only 52K examples (generated from GPT-3.5) and $600 of compute. This means that once a GPT is out there, it's fairly easy for malevolent actors to replicate it. And while it's unlikely that the original GPT model, given its strong simplicity bias, is engaging in complicated deceptive behavior, it's highly likely that a malevolent actor would fine-tune their replica to be deceptive and power-seeking. This creates a perfect storm in which malevolent AI can go rogue. I think this is a significant threat, and OpenAI should add more guardrails to try to prevent it.
I feel like capping the memory of GPUs would also affect normal folks who just want to train simple models, so it may be less likely to be implemented. It also doesn't really cap model size, which is the main problem.
But I agree it would be easier to enforce, and certainly, much better than the status quo.
I think you make a lot of great points.
I think some sort of cap is one of the highest-impact things we can do from a safety perspective. I agree that imposing the cap effectively and getting buy-in from broader society are challenges; however, those problems are a lot more tractable than AI safety itself.
I haven't heard anybody else propose this so I wanted to float it out there.
Sorry for the late response; I don't really use this forum regularly. But to get back to it: the main reason neural networks generalize is that they find the simplest function that achieves a given accuracy on the training data.
This holds true for all neural networks, regardless of how they are trained, what type of data they are trained on, or what the objective function is. It's the whole point of why neural networks work: functions with more high-frequency components are exponentially less likely. This holds for the randomly initialized prior (see arxiv.org/pdf/1907.10599) and throughout training, since the averaging behavior of SGD lets lower-frequency components be learned faster than higher-frequency ones (see arXiv:1806.08734, "On the Spectral Bias of Neural Networks").
You can pick any objective function you want; it doesn't change this basic fact. If it didn't hold, the neural network wouldn't generalize and would be useless. Many papers formalize this and derive generalization bounds based on the complexity of the function learned by the network.
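The "lower-frequency components are learned faster" claim has a clean closed form in the kernel (NTK) regime: under gradient flow, the residual along each kernel eigenmode decays as exp(-lr * lambda_k * t), and the eigenvalues lambda_k shrink as the mode's frequency grows. Here's a toy simulation of that law; the power-law eigenvalue decay is an assumed illustrative choice, not derived from a specific architecture.

```python
import numpy as np

# Kernel-regime training: the residual along eigenmode k decays as exp(-lr * lam_k * t).
ks = np.arange(1, 6)               # "frequencies" of the target's components
lam = 1.0 / ks ** 2                # assumed kernel eigenvalue per mode (illustrative power law)
target_amps = np.ones_like(ks, dtype=float)

def residual(t, lr=1.0):
    """Remaining error amplitude in each mode after training time t."""
    return target_amps * np.exp(-lr * lam * t)

r = residual(t=10.0)
print(r)   # low-frequency modes (small k) are nearly fit; high-frequency ones barely move
```

Even though every mode starts with the same target amplitude, the low-frequency modes are essentially fit by t = 10 while the highest-frequency mode has barely decayed; that gap is the spectral bias.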
A "conscious" neural network doesn't increase accuracy over a network encoding the same function sans consciousness, but it does increase the complexity of the function. Therefore, it's exponentially less likely.
I think biological systems are really different from silicon ones. The biggest difference is that biological systems can generate their own randomness; silicon ones cannot, because they're deterministic. If a neural network is probabilistic, it's only because we feed it random samples as input. I think consciousness is a precursor to free will, which can be valuable for inherently non-deterministic biological systems.
In my original post, I had linked a recent paper that finds suggestive evidence that the brain is non-classical (i.e., performs quantum computation), but I deleted it after someone told me to.
More generally, I feel that for folks concerned about AI safety, the first step is to develop a solid theoretical understanding of why neural networks generalize, the types of functions they are biased towards, how this bias is affected by the number of layers, and so on.
I feel that most individuals on Less Wrong lack this knowledge because they exclusively consume content from individuals within the rationality/AI-safety sphere. I think this leads to a lot of outlandish conjectures (e.g., conscious AI, paperclip maximizers) that don't make sense.