I feel this is appropriate for those hung up on the concept of consciousness, but there are some technical confusions in it that I find frustrating. It still seems to conflate consciousness with agency in a way I'm not sure is correct; but if one is thinking of agency when saying "consciousness", it's not the worst resource. I'd argue that consciousness is a much easier bar to pass than agency, and agency is the real concern. Given that most people treat the two as the same, and even view their own consciousness through the lens of how it contributes to their own agency, perhaps that's not so bad. But one can be awake, coherent, and selfhood-having, yet not have much in the way of intention, and that is the thing I wish it had focused on more. It's certainly on the right side of things in not treating AI as just a technology.
> I’d argue that consciousness is a much easier bar to pass than agency,
Huh? We know how to build software agents, and we have them. Whereas we can hardly even define consciousness.
coming back to this: I claim that when we become able to unify the attempted definitions, it will become clear that consciousness is a common, easily-achieved-by-accident information dynamics phenomenon, but that agency, while achievable in software, is not easily achieved by accident.
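To make "achievable in software" concrete: the standard sense in which agents are well-defined in software is an observe–decide–act loop directed at a goal. A minimal sketch (all names here are hypothetical, purely for illustration):

```python
# A trivial goal-directed agent: it observes state, decides on an
# action relative to a goal, and acts. Nothing here requires, or
# says anything about, consciousness.

def thermostat_agent(read_temp, set_heater, target=20.0):
    """Keep temperature near `target` by switching a heater."""
    temp = read_temp()        # observe the environment
    heat = temp < target      # decide: heat iff below target
    set_heater(heat)          # act on the environment
    return heat
```

The point of the sketch is that goal-directedness is an explicit design feature of the loop, not something that falls out by accident.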
some definition attempts that I don't feel hold up to scrutiny right now, but which look to me like low-scientific-quality sketches of what I think the question will resolve to later:
> coming back to this: I claim that when we become able to unify the attempted definitions,
Will we? I don't see why that's a given. A word can mean multiple things that just aren't the same and aren't reliably related. Calling a bunch of different things by the same word is a map feature (or rather, a map bug).
> it will become clear that consciousness is a common, easily-achieved-by-accident
How do you test that you have achieved it by accident? One of the things consciousness means is qualia, and we don't have qualiometers.
> some definition attempts that I don’t feel hold up to scrutiny right now,
the one zahima linked is top of the list
The definition Zahima offers:
> The meaning of consciousness discussed here is subjective internal experience, or the quality of there being “something that it is like to be something”.
...is quite normal. I suspect you are objecting to the *explanation*.
> this one which has terrible epistemic quality and argues some things that are physically nonsensical, but which nevertheless is a reasonable high temperature attempt to define it in full generality imo
Again, a theory, not a definition.
> “it boils down to, individual parts of the universe exist, consciousness is when they have mutual information” or so. you’re a big mind knowing about itself, but any system having predictive power on another system for justified reasons is real consciousness.
Ditto.
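For reference, the "mutual information" invoked in that gloss does have a standard definition in information theory (this is my annotation, not something the quoted post spells out):

$$
I(X;Y) \;=\; \sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)} \;=\; H(X) + H(Y) - H(X,Y)
$$

i.e. how much knowing one variable reduces uncertainty about the other. Note that by this measure almost any pair of physically coupled systems shares some mutual information, which is exactly why it makes for a theory-laden, very permissive criterion rather than a definition of consciousness.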
I recently came across this engaging video essay by exurb1a and felt it would be of interest to this community. Contrary to its title, it avoids the common pitfall of asserting that only conscious AI can be dangerous, and it effectively addresses several of the main topics often discussed on this forum.
If you've ever wanted a lighthearted resource to share with friends less familiar with these concepts, this might be a good starting point.