martinkunev

I've read the sequences. I'm not sure whether I'm missing something or whether the issues I raised just run deeper. I'll probably ignore this topic until I have more time to dedicate to it.

the XOR of two boolean elements is straightforward to write down as a single-layer MLP

Isn't this exactly what Minsky showed to be impossible? XOR isn't linearly separable, so you need an additional hidden layer.
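
(Not from the original post, just a sketch to make the point concrete: no single layer of linear units with a monotone activation can compute XOR, while one hidden layer with hand-picked weights suffices.)

```python
# Minimal sketch: XOR via a two-layer MLP with hand-picked weights.
# A single-layer perceptron cannot do this (Minsky & Papert, 1969),
# because XOR is not linearly separable.
import numpy as np

def step(z):
    return (z > 0).astype(int)

def xor_mlp(x):
    # hidden layer: unit 0 computes OR, unit 1 computes AND
    W1 = np.array([[1.0, 1.0],
                   [1.0, 1.0]])
    b1 = np.array([-0.5, -1.5])
    h = step(x @ W1.T + b1)
    # output: OR AND (NOT AND) == XOR
    W2 = np.array([1.0, -1.0])
    b2 = -0.5
    return step(h @ W2 + b2)

inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(xor_mlp(inputs))  # [0 1 1 0]
```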

I don't find any of this convincing at all. If anything, I'm confused.

What would a mapping look like? If it's not physically present then we recursively get the same issue - where is the mapping for the mapping?

Where is the mapping between the concepts we experience as qualia and the physical world? Does a brain do anything at all?

A function in this context is a computational abstraction. I would say this is in the map.

they come up with different predictions of the experience you’re having

The way we figure out which one is "correct" is by comparing their predictions to what the subject says. In other words, one of those predictions is consistent with the subject's brain's output, and this causes everybody to consider it the "true" prediction.

There could be countless other conscious experiences in the head, but they are not grounded by the appropriate input and output (they don't interact with the world in a reasonable way).

I think consciousness only seems to be a natural kind, and this is because there is one computation that interacts with the world in the appropriate way and manifests itself in it. The other computations are, in a sense, disconnected.

I don't see why consciousness has to be objective other than this being our intuition (which is notoriously unreliable outside hunter-gatherer contexts). Searle's wall is a strong argument that consciousness is as subjective as computation.

I would have appreciated an intuitive explanation of the paradox, something which I got from the comments.

"at the very beginning of the reinforcement learning stage... it’s very unlikely to be deceptively aligned"

I think this is quite a strong claim (hence I linked that article indicating that, for sufficiently capable models, RL may not be required to get situational awareness).

Nothing in the optimization process forces the AI to map the string "shutdown" appearing in questions to the ontological concept of a switch that turns the AI off. The simplest generalization from RL on questions containing the string "shutdown" is (arguably) for the agent to learn certain question-answering behavior, e.g. that saying certain things out loud is undesirable (rather than learning that caring about the off switch is undesirable). People would likely disagree on what counts as manipulating shutdown, which shows that the concept is quite complicated, so I wouldn't expect generalizing to it to be the default.

preference for X over Y
...
"A disposition to represent X as more rewarding than Y (in the reinforcement learning sense of ‘reward’)"

The talk about "giving reward to the agent" also made me think you may be assuming that reward is the optimization target. That said, as far as I can tell, no part of the proposal depends on that assumption.


In any case, I've been thinking about corrigibility for a while and I find this post helpful.

They were teaching us how to make our handwriting beautiful and we had to practice. The teacher would look at our notebooks and say things like "You see this letter? It's tilted in the wrong direction. Write it again!"

This was a compulsory part of the curriculum.

Not exactly a response, but some things from my experience. In elementary school in the late 90s we studied calligraphy. In high school (mid 2000s) we studied DOS.
