PhilipTrettner

Decidability of equivalence breaks somewhere between the simply typed lambda calculus and System F. Without recursive types, the calculus is strongly normalizing and therefore "trivially" decidable. However, just adding recursive types does not break decidability (e.g. see http://www.di.unito.it/~felice/pdf/ictcs.pdf). Similarly, adding some higher-order functions or parametric polymorphism does not necessarily break decidability either (e.g. see Hindley-Milner). In my (admittedly limited) experience, when making a type system stronger, it is usually some strange, sometimes subtle interaction of these "type system features" that breaks decidability.

So the first problem raised in the OP is probably tractable for many quite expressive type systems, even ones including recursive types.

I fully agree with you, though, that the second problem is usually how undecidability proofs start, so I'm more skeptical about that one.

Answer by PhilipTrettner

I'm not sure if this is exactly the same, but it reminds me a lot of recursive types and of checking whether two such recursive types are equal (see https://en.wikipedia.org/wiki/Recursive_data_type#Equirecursive_types). I looked into this a few years ago, and it seems to be decidable with a relatively simple algorithm: http://lucacardelli.name/Papers/SRT.pdf (the paper is a bit longer, but it also gives algorithms for subtyping).

To map this onto your expression problem, one could perhaps treat expression symbols as "type terminals" and reuse the same algorithm.
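
For concreteness, here is a rough Python sketch of the kind of coinductive equality check described in the Cardelli paper; the term representation, helper names, and examples are my own illustration, not the paper's actual algorithm:

```python
# Hypothetical sketch of an equirecursive type equality check.
# Types are nested tuples: ("base", name), ("var", name),
# ("arrow", domain, codomain), ("mu", name, body).

def unfold(t):
    """Unfold a top-level mu-binder once: mu X. T  ->  T[X := mu X. T]."""
    if t[0] == "mu":
        _, name, body = t
        return substitute(body, name, t)
    return t

def substitute(t, name, replacement):
    if t[0] == "var":
        return replacement if t[1] == name else t
    if t[0] == "base":
        return t
    if t[0] == "mu":
        if t[1] == name:          # variable is shadowed, stop here
            return t
        return ("mu", t[1], substitute(t[2], name, replacement))
    if t[0] == "arrow":
        return ("arrow", substitute(t[1], name, replacement),
                         substitute(t[2], name, replacement))

def equal(s, t, assumed=frozenset()):
    """Coinductive equality: assume a pair equal while comparing its children."""
    if (s, t) in assumed:         # pair already assumed equal -> succeed
        return True
    assumed = assumed | {(s, t)}
    s, t = unfold(s), unfold(t)
    if s[0] == "base" and t[0] == "base":
        return s[1] == t[1]
    if s[0] == "arrow" and t[0] == "arrow":
        return (equal(s[1], t[1], assumed) and
                equal(s[2], t[2], assumed))
    return s == t

# mu X. Int -> X  and  mu Y. Int -> (Int -> Y) unfold to the same infinite tree:
a = ("mu", "X", ("arrow", ("base", "Int"), ("var", "X")))
b = ("mu", "Y", ("arrow", ("base", "Int"),
                 ("arrow", ("base", "Int"), ("var", "Y"))))
c = ("mu", "X", ("arrow", ("base", "Bool"), ("var", "X")))
print(equal(a, b))   # True
print(equal(a, c))   # False
```

The trick that makes this terminate on recursive types is that each pair of types is assumed equal while its children are being compared, so a cycle of unfoldings closes off instead of recursing forever.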

So these ‘moral’ intuitions

The sentence seems to be cut off in the middle.

The law, as generally formulated, states that entropy increases with time. But how do we know that this is a physical law and not a pattern we notice and successfully extrapolate, in keeping with the main thesis under consideration?

I've recently read The Big Picture and spent some time chasing Quantum Mechanics Wikipedia articles. From what stuck with me, thermodynamics is itself an effective theory on top of the Standard Model of particle physics: a macroscopic, consistent approximation of our current best quantum theory. Part of this approximation is that entropy is not a fundamental physical quantity and the second law of thermodynamics is not a strict "law". Entropy is a statistical property of a system; it has an extremely high probability of increasing in a closed system, but it does not have to. From what I understand, a currently popular hypothesis is that the Big Bang was a really, really low-entropy state, and that is why we observe increasing entropy with virtual certainty.

I'm not entirely sure what my point is, but maybe it just supports yours: even such apparently ironclad properties as the second law of thermodynamics are just derived, emergent "laws" that allow us to successfully predict the universe.
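
As a toy illustration of that statistical view (my own example, not from the book): a handful of particles hopping between the two halves of a box almost always drifts toward higher entropy from an ordered start, yet individual downward fluctuations still happen all the time:

```python
# Toy Ehrenfest-style box model (my own illustration): N labelled particles hop
# at random between the left and right half of a box. The coarse-grained
# entropy S(k) = log C(N, k) counts the microstates with k particles on the left.
import math
import random

N = 50
random.seed(0)

def entropy(k):
    # log of the binomial coefficient C(N, k)
    return math.lgamma(N + 1) - math.lgamma(k + 1) - math.lgamma(N - k + 1)

k = 0                          # low-entropy start: every particle on the right
prev = entropy(k)
down_steps = 0
min_late_entropy = float("inf")

for step in range(2000):
    # each step, one uniformly chosen particle switches sides
    if random.random() < k / N:
        k -= 1
    else:
        k += 1
    s = entropy(k)
    if s < prev:
        down_steps += 1
    if step >= 1000:
        min_late_entropy = min(min_late_entropy, s)
    prev = s

print(f"max possible entropy        : {entropy(N // 2):.2f}")
print(f"steps where entropy fell    : {down_steps} of 2000")
print(f"lowest entropy after step 1k: {min_late_entropy:.2f}")
# Entropy falls on plenty of individual steps, but after the initial rise it
# hovers near the maximum and never wanders back to the ordered k = 0 state:
# the "second law" here is overwhelmingly probable, not absolute.
```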

I think some clarity on what "minimal", "optimization", "hard", and "different conditions" mean would help.

I'll take your problem "definition": a distribution D, a reward function R, some circuit C, and the expectation E of R(x, C(x)) over x drawn from D.

  1. Do we want the minimal C that maximizes E, or do we want the minimal C that satisfies E > 0? These are not necessarily equivalent, because maximizing E might be non-computable while deciding E > 0 is not. A simple example: let R(x, C(x)) be the number of 1s that the Turing machine with Gödel number C(x) writes before halting, provided it uses at most x states (and -1 for diverging machines or machines with more than x states). E > 0 just means outputting any TM that halts and writes at least one 1, and the smallest circuit for that should be easy to find. Maximizing E, however, amounts to computing the Busy Beaver number, which is notoriously non-computable.

  2. Should R be computable, semi-decidable, or arbitrary? The R function in (1) is non-computable (it has to solve the halting problem), but deciding E > 0 is still computable.

  3. What does "different conditions" mean? From your problem definition it could mean changing D or changing R (otherwise you couldn't really reuse the same circuit). I'm especially unclear about what a daemon would be in this scenario. "Slightly changing D results in E < 0" seems to be a candidate. But then "minimal C that satisfies E > 0" is probably a bad criterion: the chance that a benign, minimal C goes from E > 0 to E < 0 when D is changed slightly seems pretty high. Maybe C should be (locally) continuous(-ish; no idea how to formulate that for boolean circuits) with respect to D: small changes in D should not trigger large changes in E.

  4. I skimmed through your linked paper on obfuscation, and those circuits are only obfuscated with respect to some (bounded) complexity class. Classical boolean circuit minimization is in PSPACE and EXPTIME (if I'm not mistaken), and even in your problem statement you can easily (from a computability standpoint) check whether two circuits are the same: just check whether C(x) == C'(x) for all x, of which there are finitely many (see the sketch after this list). It's just not efficient.

  5. My intuition tells me that your problems mainly arise because we want to impose reasonable complexity constraints somewhere, but I'm not sure where in the problem formulation would be a good place for them. A lot of optimization problems have well-defined global maxima that are utterly intractable to compute or even approximate; probably most of the interesting ones. Even if you can show that minimal circuits are not daemons (however you end up defining them), that will not actually help you: given some circuit C, if you cannot compute a corresponding minimal circuit, you cannot check whether C could be a daemon. And even if you were given the minimal circuit C', you cannot check in polynomial time whether C == C' (due to SAT, I guess).
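
To make the computability point in (4) concrete, here is a tiny brute-force equivalence check in Python; the circuit representation is my own toy stand-in, not anything from the linked paper:

```python
from itertools import product

# Toy illustration for point (4): a "circuit" here is just a function from a
# tuple of n bits to a bit. Equivalence is decidable by brute force over all
# 2^n inputs -- computable, just not efficient.

def equivalent(c1, c2, n_inputs):
    """Check C(x) == C'(x) for every one of the 2^n possible inputs."""
    return all(c1(x) == c2(x) for x in product((0, 1), repeat=n_inputs))

# Two different-looking circuits computing the same function (De Morgan):
c  = lambda x: 1 - (x[0] & x[1])          # NAND
c2 = lambda x: (1 - x[0]) | (1 - x[1])    # OR of negated inputs

print(equivalent(c, c2, 2))                       # True
print(equivalent(c, lambda x: x[0] ^ x[1], 2))    # False: XOR differs on (0,0)
```

The check is exponential in the number of inputs, which is exactly the gap between "computable" and "efficient" that points (4) and (5) are about.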

(First time posting here, hope to contribute)