How does this sound like that?
The idea that we can decide what we want the AI to do, and design it to do that. To build a Hammer rather than an Anything machine that can be a hammer. Or build a General Problem Solver by working out how such a thing would work and programming that. "General problem solvers" in the GOFAI days tended to develop into programming languages specialised towards this or that type of reasoning, leaving the real work still to be done by the programmer. Prolog is the classic example.
The LLM approach has been to say that training to predict data streams is a universal fount of intelligence.
Perhaps there is scope for training specialised LLMs by training on specialised data sets, but I don't know if anyone is doing that. The more limited the resulting tool, the more limited its market, so the incentives are against it, at least until the first disaster unleashed by Claude-n.
I'd love to see folks here give their labels for where they think each of these objects is on the scale of:
A. so definitely not conscious that evidence otherwise is clearly in error, P < 0.0000001
B. definitely not conscious in only the sense that you expect to see no evidence otherwise, P ~= 0.00001...0.01
C. unlikely to be conscious, P ~= 0.01...0.10
D. plausible but unclear, P ~= 0.10...0.98
E. as plausible as anything besides yourself, P > 0.98
F. as definitely conscious as you are now, and any evidence otherwise is clearly in error, P > 0.999999
Things to label:
Edit: some more, from Richard's "elephants" suggestion:
Bonus points for also labeling each one with what you think it's conscious of, as I think that's a necessary addendum to discuss the easy problem. But the above is intended to discuss the hard problem and easy problem combined.
First list: 1, 1, 1, .7, 10⁻², 10⁻³, 10⁻⁶, 10⁻⁶, 10⁻⁸, ε, ε, ε, ε, ε.
Second list: .6, .8, .7, .7, .6, .6, .5, ε, ε, ε, ε.
Edit: Thinking about it more, something feels weird here; these numbers don't track at all "how many of these would make me press the lever in the trolley problem vs one human". For one, killing a sleeping person is about as bad as killing an awake person, because the sleeping person is a temporarily paused backup of an awake person. I guess I should instead be thinking about something like "the universe has budget for one more hour of (good) experience just before heat death, but it needs to be all one species; how much do I value each?"
F, E, E, D, C, B, A, A, A, A, A, A, A, A
The As start at "Single-celled organisms, bacteria".
There's rather a jump in your list from sleeping humans to insects. How about, say, elephants? I'd give them a D.
ETA: For the second list: E or D for all the animals (I'm not sure where I'd switch, if at all), A for the rest. I'd go down to C for, say, earthworms, but what I would really mean is more of a D for there being anything like consciousness there, but it would only be a smidgeon.
To summarize my position (anyone who's going to answer, please answer before reading this):
Please don't click this next spoiler if you haven't replied like Richard did, unless you're sure you're not going to reply!
The only A that seems easily defensible to me is the last one, and I would put C for it. Everything else seems like it could quite plausibly have (potentially millionths or trillionths of human brain level) alien consciousness to me, B or better.
My answers:
first list (you answered): F, F, F, D, E, D, D, D, C, D, D, D, C, C
In fact, I left off one thing: a rock. This is, to my intuition, a special case of the "individual molecules" case, but it is the only one where I'd put B rather than C. Because how can a brain be conscious if individual molecules can't be conscious? Presumably whatever happens to the molecules in the brain that constitutes consciousness is a property that still exists at the atomic scale, presumably something to do with either the bare fact of existence (hard problem of consciousness) or the way the molecule is currently interacting (integrated information theory type stuff). So I added a second list to cover this stuff, and here are my answers for it:
second list (added in response to your suggestion): E, E, E, E, E, E, E, C, A, D, D
how can a brain be conscious if individual molecules can't be conscious?
That goes the other direction: it's claiming things about the whole because of a claim about a part. I'm claiming that something about the parts must somehow add up to the behavior of the whole.
Splitting hairs. If something true of each part is not true of the whole, then something true of the whole is not true of each part.
No part of a car is a car, yet there is the car. How this can be is not a deep problem.
All B. (Yes, I know that variants of eliminativism are not popular here, but I was instructed to answer before reading further.)
Discussing consciousness without defining which meaning you're using will usually cause confusion and arguments, as Tamsin Leake points out.
"Consciousness" is used to label many aspects of human cognition, which is complex. So "consciousness" almost means "human-like", but without specifying along which dimensions you're drawing the analogy.
That's a great idea, but it's hard to convince people to do it, because all of the things you mention are on the easiest path to making effective AI. That's why humans have those properties; they're useful for effective cognition.
I think there might be a low-cost route. As some motivation, consider how most jobs are definitely not "run around and find what needs doing and go do it" despite humans being so general. We prefer to toolify ourselves somewhat.
I think making the case properly is going to take some real effort, though.
There are plenty of systems like this, and people will build more. But they don't do enough. So this will not preclude development of other kinds of systems...
Moreover, as soon as one has a system like this, it is trivial to write a wrapper that makes multiple calls to the system and keeps a memory.
And when one has many specialized systems like this, it is not difficult to write a custom system which takes turns calling many of them, keeps a memory, and has various additional properties.
As soon as one has a capability, it is usually not too difficult to build on top of that capability...
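To make that concrete, here is a minimal sketch, assuming a hypothetical `call_tool` that stands in for any stateless, constant-run-time system: the memory and the looping live entirely in a few lines of wrapper code around it.

```python
# Minimal sketch of the wrapper idea above. `call_tool` is a hypothetical
# stand-in for any stateless, constant-run-time system; the memory and the
# looping live entirely in the wrapper, not in the tool.

def call_tool(prompt: str) -> str:
    """Hypothetical stateless tool: one input in, one output out, no state kept."""
    return f"(tool output for: {prompt})"

def wrapped_agent(task: str, max_steps: int = 5) -> list:
    memory = []                           # the wrapper, not the tool, holds the memory
    for step in range(max_steps):         # the wrapper, not the tool, decides to keep going
        prompt = f"Task: {task}\nMemory so far: {memory}\nStep {step}:"
        memory.append(call_tool(prompt))  # each call is still a single stateless invocation
    return memory

print(wrapped_agent("book a flight"))
```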
Yes, excellent point! Someone said to me that they could add a for-loop around my precious constant-run-time system and then it's not constant anymore. This is completely true.
But agents/employees suck in lots of ways compared to tools/services, and I can see people just sticking with the tools if they're good tools.
If all the tools are sitting out there for anyone to use and compose as they please, then the agentification is bound to happen. It's not always like this, though. E.g. almost nobody scripts with MS Word; banks make me come in person for a cashier's check.
I think I'll need to rewrite this post with clearer evidence & examples to avoid just going over the same old tool-vs-agent arguments again.
I don't get it. "AI that does what we need AI to do" implies that "we" is a cohesive unit, and also that what we need is extremely limited. Neither is anywhere close to true.
I have nothing against tools, but for many desired outcomes, I don't want tools, I want someone who knows the tools and can do the work.
By "AI that does what we need AI to do" I meant "AI that can specifically do the things people ask of it" (as opposed to an AI that has already mastered every possible skill as preparation).
Perhaps I should have been clearer and more modest in my description of the target... Like, I think that N LLMs that each know 1 subject/skill are 95% as useful as 1 LLM that knows N subjects, and I think the N-LLMs setup is 20% as dangerous. But I come empty-handed in terms of real evidence. Mixture-of-experts models still being SOTA is some weak evidence.
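As a rough sketch of what I mean by the N-LLMs setup (with `classify_topic` and the specialist entries as hypothetical placeholders for a real router and real narrow models), the composition layer is just a small, inspectable dispatch table, and only one narrow model ever runs per request:

```python
# Rough sketch of the N-specialists setup: route each request to the one
# narrow model that covers it. The router and the specialists below are
# hypothetical placeholders, not real models.

def classify_topic(request: str) -> str:
    """Hypothetical router: decide which narrow skill a request needs."""
    if "bug" in request or "function" in request:
        return "code"
    return "customer_service"

SPECIALISTS = {
    # each entry would be a small model trained only on its own domain
    "code": lambda req: f"(code specialist answers: {req})",
    "customer_service": lambda req: f"(customer-service specialist answers: {req})",
}

def answer(request: str) -> str:
    topic = classify_topic(request)
    return SPECIALISTS[topic](request)   # only one narrow model runs per request

print(answer("fix the bug in this function"))
```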
I have nothing against tools, but for many desired outcomes, I don't want tools, I want someone who knows the tools and can do the work.
Same here, but I end up going for the tool over the human more often than not, because I can see & trust the tool. If I had a magic shapeshifting tool, then I think I would have little need for the general agent.
Let's build AI with constant run time.
Let's build AI without memory.
Let's build AI that doesn't want to do anything.
Let's build AI like a hammer rather than a DoAnythingNow with hammer skills.
If you insist that you need DoAnythingNow, then set up some extremely streamlined data generation & training processes, so it can do anything when you ask it, rather than learning everything in advance just in case.
Most of the stuff I ask GPT etc. for doesn't really need natural language. Natural language is not even a very good interface for, e.g., most code stuff. I would much rather click a "fix bug" button.
The first general AIs learned all of everything in advance, but I don't see any good reason to keep doing that. I see lots of good reasons to, e.g., learn on demand and then forget (there's a rough sketch of this at the end of the post).
Let's build a tool that feels like a tool and works like a tool. A magic shapeshifting tool is ok. But I don't want a slave.
I don't need my coding assistant to grok the feelings and motivations of every Shakespeare character in depth. A customer service bot will need some basic people skills but maybe skip reading Machiavelli.
Let's build AI that can only do what we need it to do.
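A rough sketch of the "learn on demand and then forget" idea above; `generate_training_data`, `finetune_adapter`, and `run_with_adapter` are hypothetical placeholders rather than a real training pipeline, and the point is only the shape: the skill is built for the request and thrown away afterwards.

```python
# Sketch of "learn on demand, then forget": build a narrow, throwaway skill
# for the current request, use it, and discard it. All three helpers are
# hypothetical placeholders for a real data-generation/training pipeline.

def generate_training_data(task: str) -> list:
    """Hypothetical: synthesise or fetch examples for just this task."""
    return [f"example {i} for {task}" for i in range(3)]

def finetune_adapter(examples: list) -> dict:
    """Hypothetical: produce a small task-specific adapter from the examples."""
    return {"skill": f"adapter trained on {len(examples)} examples"}

def run_with_adapter(adapter: dict, task: str) -> str:
    """Hypothetical: run the base model with the temporary adapter attached."""
    return f"(answer to '{task}' using {adapter['skill']})"

def serve_request(task: str) -> str:
    adapter = finetune_adapter(generate_training_data(task))  # learn on demand
    answer = run_with_adapter(adapter, task)
    del adapter                                               # ...and then forget
    return answer

print(serve_request("summarise this contract"))
```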