It's interesting to notice that there's nothing with that type on Hoogle (the Haskell API search engine), so it's not the type of any common utility.
On the other hand, you can still say quite a bit about functions of that type, drawing from type and set theory.
First, let's name k a generic function with that type: k :: (A → B) → A. It's possible to show that k cannot be parametric in both types. If it were, k :: (0 → B) → 0 would be valid, which is absurd (0 → B has an element, while 0 has none!). It's also possible to show that if k is not parametric in one type, it must have access to at least an element of that type (consider what k can return when its function argument tells it nothing usable about that type).
A simple cardinality argument also shows that k must be many-to-one (that is, non-injective): unless B is 1 (the one-element type), |A → B| = |B|^|A| > |A|, so k cannot send distinct predictions to distinct elements of A.
There is an interesting operator that uses k, which I call interleave:

interleave k f = f (k f)

Trivially, interleave k f is the consequence of the action that k selects under f. It's interesting because partially applying interleave to some k gives something of type (A → B) → B, which is the type of continuations, and I suspect that this is what underlies the common usage of such operators.
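As a sketch, here is how k and interleave might look in Haskell; the names Sel, Cont, interleave and constK are mine, not standard library functions.

```haskell
type Sel a b  = (a -> b) -> a  -- the type of k
type Cont a b = (a -> b) -> b  -- the type of continuations

-- interleave k f feeds k's chosen action back through the prediction f.
interleave :: Sel a b -> Cont a b
interleave k f = f (k f)

-- The simplest non-parametric k: as argued above, it must hold an
-- element of A, and the constant function on that element has the type.
constK :: a -> Sel a b
constK a _ = a
```

For example, interleave (constK 3) (\x -> x * x) evaluates to 9, the consequence of the action that constK selects.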
In Haskell, though, every type usually has at least one element (⊥), so a type needs not have a "real" element for something of that type to typecheck.
That Hoogle doesn't list a result essentially follows from k not being parametric in all types. (Except that it does list unsafeCoerce :: a -> b - they'd rather have the type system inconsistent than incomplete...)
(A → B) → B stands for what the agent ends up making happen, and may be easier to implement - just like predicting that Kasparov will win a chess match, without knowing how. Interesting k's should tend to have the property that interleave turns them into a particular kind of (A → B) → B. Why would you call it interleave?
Let's say that A is the set of available actions and B is the set of consequences. A → B is then the set of predictions, where a single prediction associates to every possible action a consequence. (A → B) → A is then a choice operator, which selects for each prediction an action to take.
What we have seen so far:
- k cannot be parametric in both types, and must have access to at least an element of the type it is not parametric in;
- k must be many-to-one, unless B is the one-element type [1];
- partially applying interleave to k yields a continuation (A → B) → B.
[1] To show this, let P be a partition of A → B and let ~ be the equivalence relation uniquely generated by the partition. Then any k induces such a partition through its preimages, and since |A → B| = |B|^|A| > |A| whenever B has at least two elements, at least one class of the partition must contain more than one prediction.
Re: the third point, I think it's important to differentiate between f and f*, where f* is the true prediction, that is, what actually happens when the agent performs each action.
f(k(f)) is simply the outcome the agent is aiming at, while f*(k(f)) is the outcome the agent eventually gets. So maybe what's more interesting is a measure of similarity on B, with which you can compare the two.
I found a paper about this exact sort of thing. Escardo and Oliva call that type signature a "selection functional", while the type signature (A → B) → B is called a "quantification functional", and there are several interesting things you can do with them, like combining multiple selection functionals into one in a way that looks reminiscent of game theory. (I.e., if ε has type signature (A → C) → A, and δ has type signature (B → C) → B, then ε ⊗ δ has type signature ((A × B) → C) → (A × B).)
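Here is a sketch of that pairing in Haskell, under my reading of the construction (the names pair, maxSel and minSel are mine; pair plays the role of ⊗):

```haskell
import Data.List (maximumBy, minimumBy)
import Data.Ord (comparing)

type Sel r x = (x -> r) -> x  -- a selection functional

-- Combine two selection functionals into one over the product type,
-- in the style of Escardo and Oliva's binary product: the first
-- selector chooses its move anticipating the second's best response.
pair :: Sel r a -> Sel r b -> Sel r (a, b)
pair eps delta p = (a, b)
  where
    a = eps (\x -> p (x, delta (\y -> p (x, y))))
    b = delta (\y -> p (a, y))

-- Two concrete selectors over finite, non-empty domains.
maxSel, minSel :: Ord r => [x] -> Sel r x
maxSel xs f = maximumBy (comparing f) xs
minSel xs f = minimumBy (comparing f) xs
```

With a payoff p (x, y) = 10*x - y and the domain [0, 1, 2] for each player, a maximiser paired with a minimiser selects (2, 2): the maximiser pushes x up, while the minimiser pushes y up to reduce the payoff.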
My first instinct is to translate this post to logic. Obviously A -> B doesn't imply A, because A -> B is true when A is false. So we need to expand the problem: imagine we have A -> B and some additional knowledge K, and together they imply A. Then it seems to me that K alone, without A -> B, would also imply A.
Proof: by definition, (not (A -> B)) = (A and not B). Therefore (not (A -> B)) -> A. Therefore (K and not (A -> B)) -> A. But we also have (K and (A -> B)) -> A by assumption. Therefore K -> A by case analysis.
So the only thing we can say about K is that it must imply A, and any such K would suffice. The proof only works in classical logic though. What can we say about K if the logic is intuitionistic?
We can see the difference by setting K = ((A -> B) -> A). Then in intuitionistic logic, K alone doesn't imply A because Peirce's law doesn't hold, but (K and (A -> B)) implies A by modus ponens. Moreover, it's obvious that any other suitable K must imply this one. That wraps up the intuitionistic case: K must imply (A -> B) -> A, any such K would suffice, and there's no shorter answer.
Can we exhibit specific A and B for which (A -> B) -> A holds intuitionistically, but A doesn't? Yes we can: this stackexchange answer says A = (((P -> Q) -> P) -> P) and B = (P -> Q) work. Of course this A still holds classically due to Peirce's law, but that's unavoidable.
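In fact, the inhabitation claim can be checked by writing a program of the corresponding type; the fact that it typechecks is the intuitionistic proof. This is my own sketch (the name witness is mine):

```haskell
type A p q = ((p -> q) -> p) -> p  -- Peirce's law for p and q
type B p q = p -> q

-- A term of type (A -> B) -> A: given f, we hand g a function p -> q
-- built by asking f, so that g can produce the required p.
witness :: (A p q -> B p q) -> A p q
witness f g = g (\p -> f (\_ -> p) p)
```

Note that \_ -> p is the constant function at p, which trivially inhabits A p q, so f can be applied to it.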
I wonder if there are any plausible examples of this type where the constraints don't look like ordering on B and search on A.
To be clear about what I mean by those constraints, here's an example. One way you might be able to implement this function is if you can enumerate all the values of A and then pick the one whose B is maximal according to some ordering. If you can't enumerate A, you might have some strategy for searching through it.
But that's not the only feasible strategy. For example, if you can order B, map two elements of B to a C, and order C, you might do something like taking the element of B that, together with the value just below it, maps to the greatest C.
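Here's my attempt at sketching that weirder strategy in Haskell; reading "the value less than it" as "the next-lower value among the predicted consequences" is my own guess, as are all the names:

```haskell
import Data.List (maximumBy, sortOn)
import Data.Ord (comparing)

-- weirdPick still enumerates A, but its selection criterion is not a
-- plain maximum over B: it scores each action by combining its
-- consequence with the next-lower consequence via g, and picks the
-- action with the greatest score in C.  Needs at least two actions.
weirdPick :: (Ord b, Ord c) => [a] -> (b -> b -> c) -> (a -> b) -> a
weirdPick as g f = fst (maximumBy (comparing snd) scored)
  where
    sorted = sortOn f as               -- actions ordered by consequence
    pairs  = zip (tail sorted) sorted  -- each action with its predecessor
    scored = [ (a, g (f a) (f a')) | (a, a') <- pairs ]
```

For instance, weirdPick [1..5] (*) id scores each value by its product with the predecessor and picks 5, since 5 * 4 is the largest such product.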
My question is whether these weirder functions hold any interest.
I wonder if there are any plausible examples of this type where the constraints don't look like ordering on B and search on A.
Yes, as I showed in my post, such operators must know at least an element of one of the domains of the function. If the operator knows at least an element of A, the constant function on that element has the right type. Unfortunately, it's not very interesting.
"Bringing in the agency (Ai→Ui)→Ai of both players leads to cycle. This cycle does not make sense unless the agency arrows are lossy in some way, so as to not be able to create a contradiction. "
I'm definitely missing something here - and may be thinking of things incorrectly. Isn't a contradiction inherent in cyclic behavior? I'm thinking about things like voting cycles - events where preferences are multi-peaked, resulting in shifting majorities.
Is the "lossy" point just saying that in such a cycle we have rules about pairing the alternatives that are then voted on, and once one alternative has lost, it's out of the set for future votes?
Am I thinking of this the right way (even if putting it in a bit of a different context)?
I think the gradient descent bit is spot on. That also looks like the flavour of natural selection, with non-infinitesimal (but really small) deltas. Natural selection consumes a proof that a particular A (a mutation) produces B (fitness) to generate/propagate/multiply that A.
I recently did some thinking about this and found an equivalence proof under certain conditions for the natural selection case and the gradient descent case.
In general, I think the type signature here can indeed be soft or fuzzy or lossy and you still get consequentialism, and the 'better' the fidelity, the 'better' the consequentialism.
This post has also inspired some further thinking and conversations and refinement about the type of agency/consequentialism, which I'm hoping to write up soon. A succinct intuitionistic-logic-flavoured summary is something like (A → B) → A, but there's obviously more to it than that.
This post is about the following type signature, which I call the type of agency: (A→B)→A. You can also think of it as the type of consequentialism, or of doing things on purpose. This post will be a rant with a bunch of random thoughts related to this type signature, and it will likely not make sense. It will also be sloppy and will have type errors, but I think it is worth posting anyway.
First, interpret these arrows as causal arrows, but you can also think of them as function arrows. This is saying that the causal relationship from A to B causes A to happen. Think of A as an action and B as the goal. The reason that A happens is the fact that it has B as a consequence. There are not normally exponential objects like this in Bayes' nets, but I think you can modify things so that this makes sense. (I'm not sure that this works, but: take a Cartesian closed category whose objects include the nodes in your Bayes' net, and add a small number of morphisms from product nodes to individual nodes, corresponding to the functions in the Bayes' net. The acyclicness of the Bayes' net roughly corresponds to this category being thin. Then you can consider adding other types of morphisms that keep the category thin.)
Say you have a game between two agents with action nodes A1 and A2, and utilities U1 and U2. The game implements a pair of functions A1×A2→U1 and A1×A2→U2. We can curry these functions and think of them as A2→(A1→U1) and A1→(A2→U2). Bringing in the agency (Ai→Ui)→Ai of both players leads to a cycle. This cycle does not make sense unless the agency arrows are lossy in some way, so as not to be able to create a contradiction.
Fortunately, there is another reason to think that these agency arrows will be lossy. Lawvere's fixed point theorem says that in a Cartesian closed category, unless B has the fixed point property, you cannot have a surjective function A→(A→B); in Set, this says that if B has more than one element, you cannot have an injection (A→B)→A. I.e., the agency arrows have to be lossy.
Also, notice that Argmax takes in a function f from some set A to R, and returns an element of the domain A, so Argmax has type (A→R)→A.
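A minimal Haskell rendering of this: once you fix a finite, non-empty enumeration of A, the partially applied function has exactly the type (A→R)→A. Representing the enumeration as a list is my own choice here.

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

-- argmax over an explicit finite, non-empty domain.  Partially applying
-- it to a domain yields a value of type (a -> Double) -> a, i.e. (A->R)->A.
argmax :: [a] -> (a -> Double) -> a
argmax dom f = maximumBy (comparing f) dom
```

For example, argmax [0, 1, 2, 3] (\x -> negate ((x - 2)^2)) returns 2, the point where the objective peaks.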
This one is a bit more of a stretch, but if you look at gradient descent, you have some space X, you have a function f:X→R. The gradient can be thought of as a function from infinitesimal changes in X to infinitesimal changes in f(X). Gradient descent works by converting this gradient into a change in X. i.e. Gradient descent looks kind of like (∂X→∂R)→∂X.
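As a sketch of that reading in code: gradient descent consumes the local linear map ∂X→∂R (the derivative of f at the current point) and produces a change in X. The learning rate eta and the names here are my own assumptions.

```haskell
-- One gradient-descent step, typed as (dX -> dR) -> dX: it consumes the
-- derivative of f at the current point, viewed as a linear map from
-- changes in X to changes in R, and produces a proposed change in X.
step :: (Double -> Double) -> Double
step df = negate eta * df 1   -- df 1 recovers the slope at the point
  where eta = 0.1

-- The derivative of f x = x^2 at x0, as a linear map: dx -> 2*x0*dx.
squareDerivAt :: Double -> (Double -> Double)
squareDerivAt x0 dx = 2 * x0 * dx
```

At x0 = 3 the slope is 6, so step (squareDerivAt 3) proposes a change of about -0.6, moving x downhill.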