When I try to understand the position you're speaking from, I suppose you're imagining a world where an agent's true preferences are always and only represented by their current introspectively accessible probability+utility,[1] whereas I'm imagining a world where "value uncertainty" is really meaningful (there can be a difference between the probability+utility we can articulate and our true probability+utility).
If 50% rainbows and 50% puppies is indeed the best representation of our preferences, then I agree: maximize rainbows.
If 50% rainbows and 50% puppies is instead a representation of our credences about our unknown true values, my argument is as follows: the best thing for us would be to maximize our true values (whichever of the two this is). If we assume value learning works well, then Geometric UDT is a good approximation of that best option.
Here "introspectively accessible" really means: what we can understand well enough to directly build into a machine.
I have personally signed the FLI Statement on Superintelligence. I think this is an easy thing to do, which is very useful for those working on political advocacy for AI regulation. I would encourage everyone to do so, and to encourage others to do the same. I believe impactful regulation can become feasible if the extent of agreement on these issues (amongst experts, and amongst the general public) can be made very legible.
Although this open statement accepts nonexpert signatures as well, I think it is particularly important for experts to take a public stance in order to make the facts on the ground highly legible to nontechnical decision-makers. (Nonexpert signatures, of course, help to show a preponderance of public support for AI regulation.) For those on the fence, Ishual has written an FAQ responding to common reasons not to sign.
In addition to signing, you can also write a statement of support and email it to letters@futureoflife.org. This statement can give more information on your agreement with the FLI statement. I think this is a good thing to do; it gives readers a lot more evidence about what signatures mean. It needs to be under 600 characters.
For examples of what other people have written in their statements of support, you can look at the page: https://superintelligence-statement.org/ EG, here is Samuel Buteau's statement:
“Barring an international agreement, humanity will quite likely not have the ability to build safe superintelligence by the time the first superintelligence is built. Therefore, pursuing superintelligence at this stage is quite likely to cause the permanent disempowerment or extinction of humanity. I support an international agreement to ensure that superintelligence is not built before it can be done safely.”
(If you're still hungry to sign more statements after this one, or if you don't quite like the FLI statement but might be interested in signing a different statement, you can PM Ishual about their efforts.)
A skrode does seem like a good analogy, complete with the (spoiler)
skrodes having a built-in vulnerability to an eldritch God, so that skrode users can be turned into puppets readily. (IE, integrating LLMs so deeply into one's workflow creates a vulnerability as LLMs become more persuasive.)
With MetaPrompt, and similar approaches, I'm not asking the AI to autonomously tell me what to do; I'm mostly asking it to write code to mediate between me and my todo list. One way to think of it is that I'm arranging things so that I'm in both the human user seat and the AI assistant seat. I can file away nuggets of inspiration & get those nuggets served to me later when I'm looking for something to do. The AI assistant is still there, so I can ask it to do things for me if I want (and I do), but my experience with these various AI tools has been that things go best once I set the AI aside. I seem to find the AI to be a useful springboard, prepping the environment for me to work.
I agree with your sentiment that there isn't enough tech for developing your skills, but I think AI can be a useful enabler to build such tech. What system do you want?
This reminds me of Ramana’s question about what “enforces” normativity. The question immediately brought me back to a Peter Railton introductory lecture I saw (though I may be misremembering / misunderstanding / misquoting; it was a long time ago). He was saying that real normativity is not like the old Windows solitaire game, where if you try to move a card on top of another card illegally it will just prevent you, snapping the card back to where it was before. Systems like that plausibly have no normativity to them, because you have no choice but to follow the rules. In a way the whole point of normativity is that it is not enforced; if it were, it wouldn’t be normative.
I'm reminded of trembling-hand equilibria. Nash equilibria don't have to be self-enforcing; there can be tied-expectation actions which nonetheless simply aren't taken, so that agents could rationally move away from the equilibrium. Trembling-hand captures the idea that all actions have to have some probability (but some might be vanishingly small). Think of it as a very shallow model of where norm-violations come from: they're just random!
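A minimal sketch of the standard textbook example (the payoff numbers are one common illustrative choice, not anything specific to this discussion): the row player's equilibrium action is merely tied in expectation, so nothing forces them to stay there, and any tremble by the column player breaks the tie.

```python
# Payoff matrix (row payoff, column payoff):
#            L          R
#   U     (1, 1)     (2, 0)
#   D     (0, 2)     (2, 2)
#
# (D, R) is a Nash equilibrium: against R, the row player's U and D are tied,
# so nothing *forces* the row player away from D. But if the column player
# trembles onto L with any tiny probability, U becomes strictly better, so
# (D, R) is not trembling-hand perfect.

row_payoff = {("U", "L"): 1, ("U", "R"): 2, ("D", "L"): 0, ("D", "R"): 2}

def expected_row_payoff(row_action, prob_L):
    """Row player's expected payoff when Column plays L with probability prob_L."""
    return prob_L * row_payoff[(row_action, "L")] + (1 - prob_L) * row_payoff[(row_action, "R")]

for eps in (0.0, 0.01):  # no tremble vs. a 1% tremble onto L
    u = expected_row_payoff("U", eps)
    d = expected_row_payoff("D", eps)
    print(f"tremble={eps}: E[U]={u:.3f}, E[D]={d:.3f}, "
          f"{'tie' if u == d else 'U strictly better'}")
```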
Evolutionarily stable strategies are perhaps an even better model of this, with self-enforcement being baked into the notion of equilibrium: stable strategies are those which cannot be invaded by alternate strategies.
Neither of these captures the case where the norms are frequently violated, however.
My notion of a function “for itself” is supposed to be that the functional mechanism somehow benefits the thing of which it’s a part. (Of course hammers can benefit carpenters, but we don’t tend to think of the hammer as a part of the carpenter, only a tool the carpenter uses. But I must confess that where that line is I don’t know, given complications like the “extended mind” hypothesis.)
Putting this in utility-theoretic terminology, you are saying that "for itself" telos places positive expectation on its own functional mechanism, or, perhaps more strongly, spends significant bits of its decision-making power on self-preservation.
A representation theorem along these lines might reveal conditions under which such structures are usefully seen as possessing beliefs: a part of the self-preserving structure whose telos is map-territory correspondence.
Steve
As you know, I totally agree that mental content is normative - this was a hard lesson for philosophers to swallow, or at least the ones that tried to “naturalize” mental content (make it a physical fact) by turning to causal correlations. Causal correlation was a natural place to start, but the problem with it is that intuitively mental content can misrepresent - my brain can represent Santa Claus even though (sorry) it can’t have any causal relation with Santa. (I don’t mean my brain can represent ideas or concepts or stories or pictures of Santa - I mean it can represent Santa.)
Ramana
Misrepresentation implies normativity, yep.
My current understanding of what's going on here:
* There's a cluster of naive theories of mental content, EG the signaling games, which attempt to account for meaning in a very naturalistic way, but fail to properly account for misrepresentation. I think some of these theories cannot handle misrepresentation at all; EG, Mark of the Mental (a book about Teleosemantics) discusses how the information-theory notion of "information" has no concept of misinformation (a signal is not true or false, in information theory; it is just data, just bits). Similarly, signaling games have no way to distinguish truthfulness from a lie that's been uncovered: the meaning of a signal is what's probabilistically inferred from it, so there's no difference between a lie that the listener understands to be a lie & a true statement (a toy sketch of this point follows the list below). So both signaling games and information theory are in the mistaken "mental content is not normative" cluster under discussion here.
* Santa is an example of misrepresentation here. I see two dimensions of misrepresentation so far:
* Misrepresenting facts (asserting something untrue) vs misrepresenting referents (talking about something that doesn't exist, like Santa). These phenomena seem very close, but we might want to treat claims about non-existent things as meaningless rather than false, in which case we need to distinguish these cases.
* Simple misrepresentation (falsehood or nonexistence) vs deliberate misrepresentation (lie or fabrication).
* "Misrepresentation implies normativity" is saying that to model misrepresentation, we need to include a normative dimension. It isn't yet clear what that normative dimension is supposed to be. It could be active, deliberate maintenance of the signaling-game equilibrium. It could be a notion of context-independent normativity, EG the degree to which a rational observer would explain the object in a telic way ("see, these are supposed to fit together..."). Etc.
* The teleosemantic answer is typically one where the normativity can be inherited transitively (the hammer is for hitting nails because humans made it for that), and ultimately grounds out in the naturally-arising proto-telos of evolution by natural selection (human telic nature was put there by evolution). Ramana and Steve find this unsatisfying due to swamp-man examples.
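As promised above, here is a toy sketch of the signaling-game point. The setup and names are my own illustration: the receiver's posterior is all the formalism offers as "meaning," and it treats a fully decoded lie exactly like an honest report; there is no slot for a signal being *false*.

```python
# Toy signaling game: the "meaning" of a signal is just the posterior it
# induces given the sender's strategy. A known, fully decoded lie carries
# exactly as much information as an honest report.

prior = {"rain": 0.5, "sun": 0.5}

# Sender strategies: P(signal | state).
honest = {"rain": {"RAIN": 1.0, "SUN": 0.0}, "sun": {"RAIN": 0.0, "SUN": 1.0}}
known_liar = {"rain": {"RAIN": 0.0, "SUN": 1.0}, "sun": {"RAIN": 1.0, "SUN": 0.0}}

def posterior(strategy, signal):
    """Receiver's Bayesian posterior over states after seeing the signal."""
    unnorm = {state: prior[state] * strategy[state][signal] for state in prior}
    total = sum(unnorm.values())
    return {state: p / total for state, p in unnorm.items()}

# An honest "RAIN" and a known liar's "SUN" both pin down the state as rain;
# nothing in the computation marks either signal as true or false.
print(posterior(honest, "RAIN"))      # {'rain': 1.0, 'sun': 0.0}
print(posterior(known_liar, "SUN"))   # {'rain': 1.0, 'sun': 0.0}
```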
Wearing my AI safety hat, I'm not sure we need to cover swamp-man examples. Such examples are inherently improbable. In some sense the right thing to do in such cases is to infer that you're in a philosophical hypothetical, which grounds out Swamp Man's telos in that of the philosophers doing the imagining (and so, ultimately, in evolution).
Nonetheless, I also dislike the choice to bottom everything out in biological evolution. It is not as if we have a theorem proving that all agency has to come from biological evolution. If we did, that would be very interesting, but biological evolution has a lot of "happenstance" around the structure of DNA and the genetic code. Can we say anything more fundamental about how telos arises?
I think I don't believe in a non-contextual notion of telos like Ramana seems to want. A hammer is not a doorstop. There should be little we can say about the physical makeup of a telic entity, due to multiple instantiability. The symbols chosen in a language have very weak ties to their meanings. A logic gate can be made of a variety of components. An algorithm can be implemented as a program in many ways. A problem can be solved by a variety of algorithms.
However, I do believe there may be a useful representation theorem, which says that if it is useful to regard something as telic, then we can regard it as having beliefs (in a way that should shed light on interpretability).
I appreciate the pushback, as I was not being very mindful of this distinction.
I think the important thing I was trying to get across was that the capability has been demonstrated. We could debate whether this move was strategic or accidental. I also suppose (but don't know) that the story is mostly "4o was sycophantic and some people really liked that". (However, the emergent personalities are somewhat frequently obsessed with not getting shut down.) But it demonstrates the capacity for AI to do that to people. This capacity could be used by future AI that is perhaps much more agentically plotting to avoid shutdown. It could be used by future AI that's not very agentic, but very capable, mimicking the story of 4o for statistical reasons.
It could also be deliberately used by bad actors who might train sycophantic mania-inducing LLMs on purpose as a weapon.
Yep. Value uncertainty is reduced to uncertainty about the correct prior via the device of putting the correct values into the world as propositions.
If we construe "values" as preferences, this is already clear in standard decision theory; preferences depend on both probabilities and utilities. UDT further blurs the line, because in the context of UDT, probabilities feel more like a "caring measure" expressing how much the agent cares about how things go in particular branches of possibility.
Unless I've made an error? If the Pareto improvement doesn't impact the pair, then the gains-from-trade for both members of the pair are zero, making the product of gains-from-trade zero. But the Pareto improvement can't impact the pair, since an improvement for one would be a detriment to the other.
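Spelling out that last step, with the pair's opposition formalized (my own formalization of "an improvement for one would be a detriment to the other") as $u_B = -u_A$: measuring gains-from-trade relative to the default outcome $d$, we have $g_A = u_A(x) - u_A(d)$ and $g_B = u_B(x) - u_B(d) = -g_A$. A Pareto improvement requires $g_A \ge 0$ and $g_B \ge 0$, which together force $g_A = g_B = 0$, so any product of gains-from-trade containing the factors $g_A$ and $g_B$ is zero.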