Without a clear definition and measure of "consciousness", it's almost impossible to reason about tradeoffs and utility. But that won't stop us!
This is the first time I've come across this point.
But I'm not sure that the "something" consciousness is useful for getting done is actually what other conscious entities want.
One hypothesis is that consciousness evolved for the purpose of deception -- Robin Hanson's "The Elephant in the Brain" is a decent read on this, although it does not address the Hard Problem of Consciousness.
If that's the case, we might circumvent its usefulness by having the right goals, or by strong enough detection and norm-punishing behaviours. For example, if we build closely monitored factories in which faulty machines are repaired or destroyed, and our goal is output rather than the survival of individual machines, then deception on the machines' part will not help with that goal.
If the easy and hard aspects of consciousness somehow come apart (i.e., things that don't functionally resemble the conscious parts of human brains nonetheless end up "having experience" or "having moral weight"), then this might not solve the problem, even under the deception hypothesis.
This question is also very important in scenarios where good, reflective humans don't control the future. If a rogue AI takes control of the future and the best way to do work involves consciousness, we will end up with a universe containing a great deal of consciousness, but with no concern for its suffering.
We don’t know which systems are conscious
Related: https://www.lesswrong.com/posts/wqDRRx9RqwKLzWt7R/nonperson-predicates
Here, EY discusses the concept of a non-person predicate, which evaluates things and tells you whether they are not people. If it says something is a person, it might be wrong, but it is never wrong when it says something is not a person. That way, if you get "not a person!", you can be certain that you don't have to worry about its subjective experience (and therefore, on many moral theories, its moral patienthood).
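Purely to illustrate the asymmetry of that guarantee, here is a minimal toy sketch (my own, not anything from EY's post); the function name and the "complexity threshold" criterion are invented placeholders standing in for whatever real test such a predicate would use.

```python
def non_person_predicate(system_complexity: int) -> bool:
    """Return True only if the system is certainly not a person.

    The guarantee is one-sided: True means "definitely not a person";
    False means only "could not rule it out", and may well be returned
    for things that are in fact not people.
    """
    # Placeholder criterion: treat anything below some complexity
    # threshold as safely non-person, and refuse to clear anything else.
    SAFE_COMPLEXITY_THRESHOLD = 1000  # purely illustrative value
    return system_complexity < SAFE_COMPLEXITY_THRESHOLD


# Only the True branch carries a guarantee, so only that branch licenses
# ignoring a system's possible subjective experience.
if non_person_predicate(system_complexity=3):
    print("Certainly not a person: no moral-patienthood worries.")
else:
    print("Could not rule it out: might be a person, proceed carefully.")
```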
This doesn't affect the main post's point that once we know which systems are conscious, we may find ourselves in a situation where all our best candidates for work-doing systems are also consciousness-having systems.
Also related: https://www.lesswrong.com/posts/mELQFMi9egPn5EAjK/my-attempt-to-explain-looking-insight-meditation-and
Here Kaj Sotala suggests that an aspect of our qualitative experience (suffering) can be removed without much change in our behaviours. (Though I worry that this makes our experience of suffering surprising.)
Matter can experience things. For instance, when it is a person. Matter can also do work, and thereby provide value to the matter that can experience things. For instance, when it is a machine. Or also, when it is a person.
An important question for what the future looks like is whether it is more efficient to carry out these functions separately or together.
If separately, then perhaps it is best that we end up with a huge pile of unconscious machinery, doing all the work to support and please a separate collection of matter specializing in being pleased.
If together, then we probably end up with the value being had by the entities doing the work.
I think we see people assuming that it is more efficient to separate the activities of producing and consuming value. For instance, that the entities whose experiences matter in the future will ideally live lives of leisure. And that lab-grown meat is a better goal than humane farming.
Which seems plausible. It is at least in line with the general observation that more efficient systems seem to be specialized.
However, I think this isn't obvious. Some reasons we might expect working and benefiting from work to be done by overlapping systems: