OneManyNone

I am a Computer Science PhD who has worked in Machine Learning at both Amazon and Google Brain.

I have a blog at https://onemanynone.substack.com/ where I publish posts aimed at a broader and less technical audience.


Comments


I feel as if I can agree with this statement in isolation, but can't think of a context where I would consider this point relevant.

I'm not even talking about the question of whether or not the AI is sentient, which you asked us to ignore. I'm talking about how we know that an AI is "suffering," even if we do assume it's sentient. What exactly is "suffering" in something that is completely cognitively distinct from a human? Is it just negative reward signals? I don't think so; or at least if it were, that would likely imply that training a sentient AI is unethical in all cases, since training requires negative signals.

That's not to say that all negative signals are the same, or that they might not be painful in some contexts; just that I think determining that is an even harder problem than determining whether the AI is sentient.

Fair enough. But for the purposes of this post, the point is that capability increased without increased compute. If you prefer, bucket it as "compute" vs "non-compute" instead of "compute" vs "algorithmic".

I think whether or not it's trivial isn't the point: they did it, it worked, and they didn't need to increase the compute to make it happen.

I agree. I made this point, and that is why I did not try to argue that LLMs do not have qualia.

But I do believe you can consider necessary conditions and look at their absence. For instance, I can safely declare that a rock does not have qualia, because I know it does not have a brain.

Similarly, I may not be able to measure whether LLMs have emotions, but I can observe that the processes that generated LLMs are highly inconsistent with the processes that caused emotions to emerge in the only case where I know they exist. Pair that with the observation that specific human emotions seem like only one option out of infinitely many, and it makes a strong probabilistic argument.

This is sort of why I made the argument that we can only consider necessary conditions, and look for their absence.

But more to your point, LLMs and human brains aren't "two agents that are structurally identical." They aren't even close. The fact that a hypothetical built-from-scratch human brain might have the same qualia as humans isn't relevant, because that's not what's being discussed.

Also, unless your process was precisely "attempt to copy the human brain," I find it very unlikely that any AI development process would yield something particularly similar to a human brain.

I have explained myself more here: https://www.lesswrong.com/posts/EwKk5xdvxhSn3XHsD/don-t-over-anthropomorphize-ai

OK, I've written a full rebuttal here: https://www.lesswrong.com/posts/EwKk5xdvxhSn3XHsD/don-t-over-anthropomorphize-ai. The key points are at the top.

In relation to your comment specifically, I would say that anger may have that effect on the conversation, but there's nothing that actually incentivizes the system to behave that way - the slightest hint of anger or emotion would receive immediate negative reward during RLHF training. Compare to a human: there may actually be some positive reward to anger, but even if there isn't, evolution still allowed us to get angry because we are mesa-optimizers for whom anger has a positive effect overall.

Therefore, the system must have learned angry behavior in stage-1 training. But that stage has no reward structure, and so could not have associated different texts with different qualia.
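To make the stage-1 / stage-2 distinction concrete, here is a minimal toy sketch in PyTorch. Everything in it (the tiny model, the dummy data, the reward_model stub and its "angry" check) is an illustrative placeholder of my own, not anything from a real RLHF pipeline: the point is only that the pretraining loss contains no reward signal at all, while the RLHF-style step is exactly the place where an angry completion would be pushed down.

```python
# Toy contrast between the two training stages discussed above.
# All sizes, data, and the reward_model stub are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, hidden = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, hidden), nn.Linear(hidden, vocab_size))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

# Stage 1: next-token prediction. The loss only measures imitation of the
# training text; no behavior is rewarded or punished as such.
tokens = torch.randint(0, vocab_size, (8, 16))   # dummy corpus batch
logits = model(tokens[:, :-1])
loss_stage1 = F.cross_entropy(logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))
loss_stage1.backward()
opt.step()
opt.zero_grad()

# Stage 2: RLHF-style update, simplified to a single REINFORCE step.
# A hypothetical reward model scores the sampled output; anything judged
# "angry" gets a negative reward, so its log-probability is pushed down.
def reward_model(sampled_token):
    looks_angry = bool(torch.rand(()) < 0.1)     # placeholder judgment
    return -1.0 if looks_angry else 1.0

prompt = torch.randint(0, vocab_size, (1, 8))
next_logits = model(prompt)[:, -1, :]
dist = torch.distributions.Categorical(logits=next_logits)
sample = dist.sample()
loss_stage2 = -reward_model(sample) * dist.log_prob(sample).mean()
loss_stage2.backward()
opt.step()
opt.zero_grad()
```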

Hmmm... I think I still disagree, but I'll need to process what you're saying and try to get more into the heart of my disagreement. I'll respond when I've thought it over.

Thank you for the interesting debate. I hope you did not perceive me as being overly combative.

I see, but I'm still not convinced. Humans act in anger as a way to forcibly change a situation into one that is favorable to them. I don't believe that's what the AI was doing, or trying to do.

I feel like there's a thin line I'm trying to walk here, and I'm not doing a very good job. I'm not trying to comment on whether or not the AI has any sort of subjective experience. I'm just saying that even if it did, I do not believe it would bear any resemblance to what we as humans experience as anger.

Ah okay. My apologies for misunderstanding.
