I added this to the blog post to explain why I don't think your objection goes through:
"[Edit: To respond to an objection that was made on another forum to this blog- advocate for in the context of this section does not necessarily mean the claim is true. If the public thinks the likelihood of X is 1%, and your own assessment, not factoring in the weight of others’ judgments, is 30%, you shouldn’t lie and say you think it’s true. Advocacy just means making a case for it, which doesn’t require lying about your own probability assessment.]"
Here's an analogy. AlphaGo had a value network that assessed any given board position. It was separate from the Monte Carlo tree search procedure, which explicitly planned into the future. However, it seems probable that, in assessing the value of the board, AlphaGo was in some sense implicitly evaluating the future possibilities of the position. Is that the kind of evaluation you're suggesting is happening? "Explicitly" ChatGPT only looks one token ahead, but "implicitly" it is weighing those options in light of the future directions the text could take?
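Maybe a toy sketch makes the distinction concrete (the game, the states, and the payoffs below are all invented for illustration, and the "implicit" evaluator is stood in for by a lookup table rather than a trained network):

```python
# Sketch of "explicit" lookahead (enumerating future positions) versus
# "implicit" lookahead (a single evaluation call whose output was shaped
# by future consequences during training).

# A tiny deterministic game tree: internal states map to successor states,
# leaves map to a payoff for the player moving at the root.
TREE = {
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
LEAF_PAYOFF = {"a1": 0.2, "a2": 0.9, "b1": 0.4, "b2": 0.5}


def explicit_search(state: str) -> float:
    """Explicit planning: recursively expand future positions at decision time."""
    if state in LEAF_PAYOFF:
        return LEAF_PAYOFF[state]
    # Assume the mover at every node picks the best continuation.
    return max(explicit_search(child) for child in TREE[state])


# "Implicit" evaluation: a lookup table standing in for a trained value
# network. At inference time it never expands the tree, but its numbers
# were produced by (here, literally copied from) the search above, so the
# future is baked into the single number it returns.
VALUE_TABLE = {state: explicit_search(state) for state in TREE}


def implicit_evaluation(state: str) -> float:
    """One 'forward pass': no lookahead happens now, but it happened upstream."""
    return VALUE_TABLE[state]


if __name__ == "__main__":
    print(explicit_search("root"))       # 0.9, found by expanding the tree
    print(implicit_evaluation("root"))   # 0.9, returned without any expansion
```

The question above is whether next-token prediction is more like the second function: no explicit rollout at generation time, but an evaluation that was shaped by where texts tend to go.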
Thank you, I will start having a read. At first glance, this reminds me of the phenomenon of reference magnetism often discussed in philosophy of language. I suspect a good account of natural abstractions will involve the concept of reference magnetism in some way, although teasing out the exact relationship between the concepts might take a while.
I see your point now, but I think this just reflects the current state of our knowledge. We haven't yet grasped that we are implicitly creating, if not minds, then somewhat mind-like things every time we order an artificial intelligence to play a particular character.
When this knowledge becomes widespread, we'll have to confront the reality of what we do every time we hit run. And then we'll be back to the problem of theodicy, the God here being the being that presses play, and the question being: is pressing play consistent with their being good people?* If I ask GP...
Certainly, it is possible, but I see little to guarantee our descendants won't create simulations that are like the world we live in now.
I have to disagree here. I strongly suspect that GPT, when it, say, pretends to be a certain character, is running a rough-and-ready approximate simulation of that character's mental state and its interacting components (various beliefs, desires, etc.). I have previously discussed this in an essay, which I will be posting soon.
The monopsony approach to the labor market says they're the rule, not the exception. A company doesn't have to be formally the only buyer of labor power in its region to hold monopsony power.
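For what it's worth, here is the standard textbook way to see why sole-buyer status isn't required (my gloss, not something from the original post): a firm facing an upward-sloping labor supply curve w(L) chooses employment to maximize profit, where MRP_L is the marginal revenue product of labor and ε is the elasticity of labor supply to that individual firm.

```latex
\[
  \max_L \; \pi(L) = R(L) - w(L)\,L
  \quad\Longrightarrow\quad
  MRP_L = w\left(1 + \tfrac{1}{\varepsilon}\right)
  \quad\Longrightarrow\quad
  w = \frac{MRP_L}{1 + 1/\varepsilon}.
\]
```

Only in the limit of perfectly elastic labor supply (ε going to infinity) does the wage equal MRP_L. Any finite ε, which can arise from search frictions, commuting costs, or differentiated jobs even when other employers exist nearby, gives the firm room to set wages below marginal revenue product, which is the sense in which monopsony power can be the rule.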