The analogue of P≠NP for randomized polynomial-time algorithms is actually NP≠RP (or equivalently NP⊈BPP), assuming that the intended meaning here is "NP-complete problems cannot be solved in polynomial time, even with randomized algorithms".[1] More information about these classes is available here.
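For reference, the relevant (standard) inclusions among these classes are:

\[
  \mathsf{P} \;\subseteq\; \mathsf{RP} \;\subseteq\; \mathsf{NP},
  \qquad
  \mathsf{RP} \;\subseteq\; \mathsf{BPP}.
\]

Since RP ⊆ NP, saying NP≠RP is the same as saying NP⊈RP; and there is a standard result (due to Ko, if I remember correctly) that NP ⊆ BPP implies NP = RP, which is what makes the two formulations above equivalent.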
Also, a bit of a nitpick, but it looks like the claim that these widely-believed complexity-theoretic assumptions imply
> that there is no way, in polynomial time, to get reliable information about A’s secret circuit C (beyond some statistical regularities) from looking at polynomially many input-output samples of C.
is implicitly claiming that a "learning problem" similar to the following is NP-hard:
> There is a hidden function f that takes n bits as input and outputs one bit, and can be described with a polynomial-sized circuit. You are given some number of input-output pairs (x_i, f(x_i)). Find a polynomial-sized circuit C that is consistent with the given inputs and outputs.
It seems likely that this learning problem is indeed computationally difficult, given that the naive approach requires brute-force search through circuits. However, to actually prove NP-hardness, there needs to be a reduction from 3SAT (say) to that problem. I don't see any straightforward approach to construct such a reduction, even though it is not entirely implausible that one exists (possibly with minor modifications to the problem statement).
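To make the "naive approach" concrete, here is a minimal Python sketch (my own illustration, not from the original post) that enumerates Boolean functions on n bits by their truth tables, standing in for the search through candidate circuits. Its running time is doubly exponential in n, which is exactly why brute force is infeasible; note that this by itself does not establish NP-hardness.

```python
from itertools import product

def naive_learn(samples, n):
    """samples: list of (input_tuple, output_bit) pairs observed from the hidden f."""
    inputs = list(product((0, 1), repeat=n))            # all 2^n possible inputs
    for outputs in product((0, 1), repeat=2 ** n):      # all 2^(2^n) truth tables
        candidate = dict(zip(inputs, outputs))
        if all(candidate[x] == y for x, y in samples):
            return candidate                            # first consistent hypothesis
    return None                                         # unreachable if samples don't contradict each other

# Toy instance: three observed samples of a hidden 3-bit function.
samples = [((0, 0, 1), 0), ((1, 0, 0), 1), ((1, 1, 1), 0)]
table = naive_learn(samples, n=3)
print(table[(1, 1, 0)])   # the learned hypothesis on an unseen input
```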
I do agree with the conclusion, though, that the problem being discussed here is very different from what is captured by statements like P≠NP.
[1] The statement P≠BPP actually means "randomized polynomial-time algorithms cannot be 'derandomized' by replacing them with equivalent deterministic polynomial-time algorithms".
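As a concrete illustration of what is at stake in P vs. BPP: polynomial identity testing, i.e. checking whether two arithmetic expressions compute the same polynomial, has a simple randomized polynomial-time algorithm via the Schwartz-Zippel lemma, but no deterministic polynomial-time algorithm is known; P=BPP would imply one exists. A minimal Python sketch (my own illustration, with made-up names):

```python
import random

def probably_identical(p, q, num_vars, degree_bound, trials=30):
    # Schwartz-Zippel: if p != q as polynomials, a random point drawn from a
    # set of size S exposes the difference with probability >= 1 - degree_bound/S.
    sample_range = 100 * degree_bound        # size of the evaluation set
    for _ in range(trials):
        point = [random.randrange(sample_range) for _ in range(num_vars)]
        if p(*point) != q(*point):
            return False                     # a witness that p != q; no error possible
    return True                              # identical with overwhelming probability

# (x + y)^2 versus x^2 + 2xy + y^2: identical polynomials of degree 2.
p = lambda x, y: (x + y) ** 2
q = lambda x, y: x * x + 2 * x * y + y * y
print(probably_identical(p, q, num_vars=2, degree_bound=2))   # True
```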