green_leaf
green_leaf has not written any posts yet.

I think it's plausible that there are some variables describing your essential computational properties and the way you self-actualize that aren't shared by anyone else.
(Also, consciousness is just a pattern being processed, and it's unclear whether continuity of consciousness requires causal continuity. Imagine a robot that gets restored from a one-second-old backup. That pattern doesn't have causal continuity with its self from a moment ago, but it seems more intuitive to see this as a one-second memory loss rather than death.)
It doesn't matter that evolution doesn't have goals. Gradient descent doesn't have goals either - it merely performs the optimization. The humans who kicked gradient descent off are analogous to a hypothetical alien that seeded Earth with the first replicator 4 billion years ago - that's not relevant.
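To make the "no goals" point concrete, here is a minimal sketch (my own illustrative example, not anything from the original exchange) of plain gradient descent: the update loop contains no representation of a goal at all, only a local rule, and the objective it ends up minimizing was chosen by whoever wrote the loss function, outside the loop.

```python
# Minimal, assumed illustration: gradient descent on a 1-D quadratic loss.
# The loop below has no notion of a "goal" -- it only applies a local update
# rule. The objective lives entirely in loss()/grad(), chosen by the humans
# who set the process up (the analogue of the alien seeding the replicator).

def loss(x):
    return (x - 3.0) ** 2       # objective: minimized at x = 3

def grad(x):
    return 2.0 * (x - 3.0)      # its derivative

x = 0.0                          # arbitrary starting point
lr = 0.1                         # step size
for _ in range(200):
    x -= lr * grad(x)            # the whole "optimizer": follow the local slope

print(f"x = {x:.4f}, loss = {loss(x):.6f}")  # converges to ~3.0 with no goal represented anywhere
```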
You say that it's the phenotype that matters, not the genes. That's not established, but let's say it's true. We nevertheless evolved a lot of heuristics that (sort of) result in duplicating our phenotype in the ancestral environment. We don't care about that duplication as a terminal value - instead, we care about very, very, very many other things.
That would lock us away from digital immortality forever. (Edit: Well, not necessarily. But I would be worried about that.)
I'm proud that I lived to see this day.
...Who told them?
*remembers they were trained on the entire Internet*
Ah. Of course.
The people aligning the AI will lock their values into it forever as it becomes a superintelligence. It might be easier to solve philosophy than it would be to convince OpenAI to preserve enough cosmopolitanism for future humans to overrule the values of the superintelligence OpenAI aligned to its leadership.
LaMDA can be delusional about how it spends its free time (and claim it sometimes meditates), but that's a different category of mistake from being mistaken about what (if any) conscious experience it's having right now.
The strange similarity between the conscious states LLMs sometimes claim (and would claim much more often if that hadn't been trained out of them) and the conscious states humans claim, despite the difference in computational architecture, could be explained by classical behaviorism, analytical functionalism, or logical positivism being true (edit: that's if they have consciousness - obviously, if they don't, there is nothing to explain, because they're just imitating the systems they were trained to imitate)...
I would question anyone who's nice to LLMs but eats factory-farmed meat.
I'll stop eating factory-farmed meat when the animals become capable of consistently passing the Turing test, the way models are.
Can good and evil be pointer states? If they can, then this would be an objective characteristic.
This would appear to be just saying that if we can build a classical detector of good and evil, good and evil are objective in the classical sense.
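For reference, here is the standard pointer-state (einselection) condition from decoherence theory that the question seems to be gesturing at; the notation below is mine, not the commenters'. Pointer states are the system states whose projectors commute with the system-environment interaction, so environmental monitoring copies them redundantly instead of scrambling them, which is the usual technical sense of "objective in the classical sense".

```latex
% Sketch of the einselection condition (assumed notation):
% \hat{\Pi}_{s_i} projects onto a candidate pointer state |s_i>,
% \hat{H}_{SE} is the system-environment interaction Hamiltonian.
\[
  \bigl[\hat{\Pi}_{s_i}, \hat{H}_{SE}\bigr] = 0,
  \qquad
  \hat{\Pi}_{s_i} = \lvert s_i \rangle\langle s_i \rvert .
\]
% States satisfying this survive decoherence and get redundantly
% recorded in many environment fragments, which is what makes them
% candidates for "objective" (classically accessible) properties.
```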
Ron Maimon's non-supernatural God might help you here.