I operate by Crocker's rules.
I try to not make people regret telling me things. So in particular:
- I expect to be a safe person to ask whether your post would give AI labs dangerous ideas.
- If you worry that I'll produce such posts, I'll try to keep your worry from making them more likely, even if I disagree. Not thinking about the idea will be easier for me if you don't spell it out in the initial contact.
The definition of <-> in terms of the previous connectives can't be right; that expression is always 1, regardless of a and b. Also, you missed the s in the formula for <->.
(Also it's kinda iffy that weak disjunction is a stronger statement than strong disjunction...)
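For reference, here is why that expression comes out as constantly 1, assuming the post uses the Łukasiewicz-style connectives these comments name (my assumption; the definitions below are the standard ones, not quoted from the post):

```latex
\[
a \to b = \min(1,\; 1 - a + b), \qquad
\text{strong\_disjunction}(x, y) = \min(1,\; x + y).
\]
For any $a, b \in [0,1]$, at least one of $a \to b$ and $b \to a$ equals $1$, so
\[
\text{strong\_disjunction}(a \to b,\; b \to a)
  = \min\bigl(1,\; (a \to b) + (b \to a)\bigr) = 1 .
\]
```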
The story of 1/2, 1/4, ... could be continued immediately to the infinite case, right? (And all the way up the ordinal ladder.)
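For what it's worth, the finite stages already pin down the infinite case (assuming the story is about the weights 1/2, 1/4, ... themselves; that reading is mine):

```latex
\[
\sum_{n=1}^{N} 2^{-n} = 1 - 2^{-N} \;\longrightarrow\; 1
\quad \text{as } N \to \infty .
\]
```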
I.e.: bounded to
Not quite: this would make a->a false.
Maybe include a definition of <-> in the table? A quote refers to it later.
Yeah, while I noticed the distinction, I usually find it worthwhile to try to steal tools across problem statements that use the same words in a different order. I'll use your data point to downweight that heuristic a little, thanks :p
Did knowing that the joint-gaussian thing generalizes to RNNs influence your decision to look at RNNs next?
The second case. Lots of emotions can be found in animals.
We would be really interested in finding a way to mechanistically estimate the average output of random recurrent neural networks (RNNs).
Have you seen "Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes"?
Do you expect that whether the infinite sum can be approximated is related to whether the architecture averts the vanishing gradient problem?
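As a baseline for what a mechanistic estimate would have to beat, here is a brute-force Monte-Carlo sketch, assuming "average output of a random RNN" means the mean (and second moment) of a scalar read-out over i.i.d. Gaussian weight draws for a fixed input sequence; the function and parameter names are illustrative, not from the post:

```python
# Monte-Carlo baseline for the average output of a random vanilla RNN.
# Assumptions (mine, not the post's): i.i.d. Gaussian weights with 1/sqrt(fan-in)
# scaling, tanh nonlinearity, scalar linear read-out of the final hidden state.
import numpy as np

def random_rnn_output(rng, x_seq, hidden_dim, sigma_w=1.0, sigma_u=1.0):
    """Run one freshly sampled RNN over x_seq and return its scalar read-out."""
    input_dim = x_seq.shape[1]
    W = rng.normal(0.0, sigma_w / np.sqrt(hidden_dim), (hidden_dim, hidden_dim))
    U = rng.normal(0.0, sigma_u / np.sqrt(input_dim), (hidden_dim, input_dim))
    v = rng.normal(0.0, 1.0 / np.sqrt(hidden_dim), hidden_dim)
    h = np.zeros(hidden_dim)
    for x_t in x_seq:                      # recurrent update over the sequence
        h = np.tanh(W @ h + U @ x_t)
    return v @ h

rng = np.random.default_rng(0)
x_seq = rng.normal(size=(10, 3))           # fixed input: 10 time steps, 3 features
samples = np.array([random_rnn_output(rng, x_seq, hidden_dim=256) for _ in range(2000)])
# The mean is ~0 by symmetry of the read-out; the second moment is the quantity the
# Gaussian-process limit (as in Tensor Programs I) would give in closed form.
print("mean output:   ", samples.mean(), "+/-", samples.std() / np.sqrt(len(samples)))
print("second moment: ", (samples ** 2).mean())
```

A mechanistic estimate would replace the sampling loop with something like the corresponding kernel recursion over time steps, rather than averaging over weight draws.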
We've had more than a 10x algorithmic improvement over the last 50 years.
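(For scale, if that 10x were spread evenly over the 50 years, which is a simplification, the implied compound rate is modest:)

```latex
\[
10^{1/50} \approx 1.047 \quad\Longrightarrow\quad \text{about } 4.7\%\ \text{per year.}
\]
```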
Then maybe spell out that they train it to do a good thing even when told to do a bad thing.
It was previously strong_disjunction(a->b,b->a) instead of weak_conjunction(a->b,b->a) :P.
(I wonder how one decides whether to use weak or strong conjunction there...)
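A quick numeric check of both points, again assuming Łukasiewicz-style connectives (my assumption about the post's setup): weak and strong conjunction happen to coincide on the pair (a->b, b->a), because one of the two implications is always exactly 1, while strong disjunction of that pair is constantly 1.

```python
# Numeric check over a grid of truth values in [0, 1], assuming Lukasiewicz-style
# connectives: a -> b = min(1, 1 - a + b), weak conj = min, strong conj = max(0, x + y - 1),
# strong disj = min(1, x + y).
import itertools

def implies(a, b):
    return min(1.0, 1.0 - a + b)

def weak_conj(x, y):
    return min(x, y)

def strong_conj(x, y):
    return max(0.0, x + y - 1.0)

def strong_disj(x, y):
    return min(1.0, x + y)

grid = [i / 20 for i in range(21)]
for a, b in itertools.product(grid, grid):
    fwd, bwd = implies(a, b), implies(b, a)
    # the strong-disjunction version of <-> is constantly 1 ...
    assert abs(strong_disj(fwd, bwd) - 1.0) < 1e-9
    # ... while weak and strong conjunction agree and both give 1 - |a - b|
    assert abs(weak_conj(fwd, bwd) - (1.0 - abs(a - b))) < 1e-9
    assert abs(strong_conj(fwd, bwd) - (1.0 - abs(a - b))) < 1e-9
print("all checks pass")
```

So at least for <->, the weak/strong conjunction choice doesn't matter; the disjunction-vs-conjunction choice is what bites.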