Comments

>you're making a token-predicting transformer out of a virtual system with a human emulation as a component.

Should it make a difference? Same iterative computation.

>In the system, the words "what's your earliest memory?" appearing on the paper are going to trigger all sorts of interesting (emulated) neural mechanisms that eventually lead to a verbal response, but the token predictor doesn't necessarily need to emulate any of that.

Yes, I talked about optimizations a bit. I think you are missing the point of this example. The point is that if you try to conclude, from the fact that this system is doing next-token prediction, that it's definitely not conscious, you are wrong. And my example is an existence proof, kind of.
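
To make that concrete, here's a toy Python sketch (everything in it is hypothetical, and the "emulation" is just a stub): both objects below "do next-token prediction" through the same interface, but one of them routes the answer through an arbitrarily rich internal process, which is the only point being made.

```python
# Toy sketch: "next-token prediction" describes an interface, not the
# mechanism behind it. One predictor is a trivial lookup; the other defers
# to a stub standing in for the emulated human in the thought experiment.

from collections import Counter

class BigramPredictor:
    """Predicts the next token from simple bigram counts."""
    def __init__(self, corpus_tokens):
        self.counts = {}
        for prev, nxt in zip(corpus_tokens, corpus_tokens[1:]):
            self.counts.setdefault(prev, Counter())[nxt] += 1

    def predict_next_token(self, context):
        options = self.counts.get(context[-1])
        return options.most_common(1)[0][0] if options else "<unk>"

class EmulationBackedPredictor:
    """Same interface, but the answer comes from a rich internal process."""
    def __init__(self, emulated_person):
        self.person = emulated_person  # stub for the whole emulated system

    def predict_next_token(self, context):
        # Internally this could involve emulated memories, introspection, etc.
        response = self.person.respond(" ".join(context))
        return response.split()[0]

class StubPerson:
    def respond(self, text):
        return "probably ..."  # placeholder for the emulation's verbal output

for predictor in (BigramPredictor("the cat sat on the mat".split()),
                  EmulationBackedPredictor(StubPerson())):
    print(type(predictor).__name__, "->", predictor.predict_next_token(["the"]))
```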

>It seems you are arguing that anything that presents like it is conscious implies that it is conscious.

No? That's definitely not what I'm arguing. 

>But what ultimately matters is what this thing IS, not how it came to be that way. If this thing internalized that conscious type of processing from scratch, without having it natively, then the resulting mind isn't worse than the one that evolution engineered with more granularity. It doesn't matter if this human was assembled atom by atom by a molecular assembler; it's still a conscious human.

Look, here I'm talking about the pathways by which you acquire that "structure" inside you, not the outward look of it.

I think this kind of framing is confused and slippery; I feel like I'm trying to wake up and find a solid formulation of it.

Like, what does it mean to do it by yourself? Do humans do it by themselves? Who knows, but probably not; children who grow up without any humans nearby are not very human.

Humans teach humans to behave as if they are conscious. Just like the majority of humans have a sense of smell, and they teach humans who don't to act like they can smell things. And some only discover as adults that smell isn't an inferred characteristic. This is how a probably non-conscious human could pass as conscious, if such a disorder existed, hm?

But what ultimately matters is what this thing IS, not how it came to be that way. If this thing internalized that conscious type of processing from scratch, without having it natively, then the resulting mind isn't worse than the one that evolution engineered with more granularity. It doesn't matter if this human was assembled atom by atom by a molecular assembler; it's still a conscious human.

Also, remember that one paper where LLMs can substitute CoT with filler symbols like "......."? [insert the link here] Not sure what's up with that, but it's kind of interesting in this context.

Good point, Claude, yeah. Quite alien indeed, maybe more parsimonious. This is exactly what I meant by the possibility of this analogy being overridden by actually digging into your brain, digging into a human one, developing actual technical gears-level models of both, and then comparing them. Until then, who knows; I'm leaning toward a healthy dose of uncertainty.

Also, thanks for the comment.

If traders can get access to a control panel for the actions of the external agent AND they profit from accurately predicting its observations, then wouldn't the best strategy be "create as much chaos as possible that is only predictable to me, its creator"? So traders that value ONLY accurate predictions would get the advantage?
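
A minimal toy model of that incentive (the environment, the scoring, and every name here are made up, just to illustrate the worry): a trader that both picks the action and is scored on predicting the resulting observation can inject randomness that only it can predict.

```python
# Toy model: observation = f(action, chaos). A trader that seeds the chaos
# itself predicts the outcome exactly; everyone else has to guess.

import random

def observe(action, chaos):
    return action * 10 + chaos  # toy environment

def manipulative_trader():
    secret = random.Random(1234)        # private randomness only it knows
    chaos = secret.randrange(1000)
    action = chaos % 7                  # action choice encodes the chaos
    prediction = observe(action, chaos) # so its own prediction is exact
    return action, chaos, prediction

def honest_trader(action):
    # Sees the chosen action but not the private seed; must guess the chaos.
    return observe(action, 500)

action, chaos, manipulative_prediction = manipulative_trader()
actual = observe(action, chaos)
print("manipulative trader error:", abs(manipulative_prediction - actual))  # 0
print("honest trader error:", abs(honest_trader(action) - actual))          # large on average
```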

Well, maybe LLMs can "experiment" on their dataset by assuming something about it and then being modified if they encounter a counterexample.

I think that vaguely counts as experimenting.
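
The loop I have in mind, as a purely illustrative sketch (nothing here is how actual LLM training works): hold an assumption about the data, and revise it the moment the stream produces a counterexample.

```python
# "Assume, then get corrected by a counterexample" as hypothesis revision
# over a data stream. The revision step is the analogue of a training update.

data_stream = [2, 4, 6, 8, 9, 10]

hypothesis = "all numbers are even"          # the initial assumption
for example in data_stream:
    if example % 2 != 0:                     # counterexample encountered
        hypothesis = "numbers are usually even, with exceptions"
        break

print(hypothesis)
```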

I think that there may be wrapper-minds with very detailed utility functions such that, whatever qualities you attribute to agents that are not wrapper-minds, the wrapper-mind's behavior will match theirs with arbitrary precision on arbitrarily many evaluation parameters. I don't think it's practical or something that has a serious chance of happening, but I think it's a case that might be worth considering.

 

Like, maybe it's very easy to build a wrapper-mind that is a very good approximation of a very non-wrapper mind. Who knows.
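
One way to see why this isn't crazy in principle (toy code, all names made up): any fixed policy can be rewritten as maximization of a sufficiently detailed utility function, so a wrapper-mind can reproduce a non-wrapper agent's behavior exactly on whatever inputs you evaluate.

```python
# Toy illustration: a "wrapper-mind" whose utility function rewards exactly
# the action some target policy would have taken, so argmax reproduces
# that policy on every evaluated input.

def non_wrapper_policy(observation):
    # Some arbitrary behavior we want to imitate.
    return "help" if "distress" in observation else "wait"

def wrapper_mind(observation, actions=("help", "wait", "defect")):
    def utility(action):
        # Very detailed utility function, defined pointwise over observations.
        return 1.0 if action == non_wrapper_policy(observation) else 0.0
    return max(actions, key=utility)

for obs in ["calm scene", "distress signal"]:
    assert wrapper_mind(obs) == non_wrapper_policy(obs)
print("wrapper-mind matches the target policy on these inputs")
```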

Sounds like the statement "no AI can have or get them".

Well, it can learn them; it can develop them based on a dataset of people's stories. It looks especially possible with the approach that is currently being used.

Isn't consciousness just "read-only access to the world" then? Like, is there some reason why dualism is not isomorphic to parallelism?

There is a lot more useful data on YouTube (by at least several orders of magnitude? idk); I think the next wave of such breakthrough models will train on video.
