She's already abused her power at least once to ban someone for expressing opinions she doesn't like.
I'm dubious that that constitutes abusing her power; AdvancedAtheist was highly and consistently downvoted for a long period of time before being banned.
Just say you are a dictator and ban on a whim.
There is a slight problem in that LW is not Nancy's personal blog to be shaped by her whims.
As Romeo noted, Nancy was appointed roughly by popular acclaim (more like, a small number of highly dedicated and respected users appointing her, and no one objecting). I think it's reasonable in general to give mods a lot of discretionary power, and trust other veteran users to step in if things take a turn for the worse.
My main update from this discussion has been a strong positive update about Gleb Tsipursky's character. I've been generally impressed by his ability to stay positive even in the face of criticism, and to continue seeking feedback for improving his approaches.
You're creepy and artificial. Ella is creepy and artificial. This post is creepy and artificial. The About Us page of Intentional Insights is -very- creepy and artificial. And what makes this all bizarre is that the creepy and artificial is recursive - there's something creepy and artificial about the way you're creepy and artificial, in that it is so transparent and obvious that it cannot possibly be unintentionally transparent and obvious. The way you keep selling yourself, selling your company (which itself is selling you), selling merchandise selling your company selling yourself...
Well, knock it off. I don't know if you're a spider in a human suit, or a human in a spider-in-a-human-suit suit, or a spider in a human-in-a-spider-in-a-human-suit-suit suit, but at a certain level it stops mattering. If you're a naive innocent playing at Dark Arts, you're reading as a narcissistic con artist, and not even a terribly good one. If you're a sociopath playing as a naive innocent playing at Dark Arts in order to do something more elaborate that probably only vaguely involves Less Wrong, well, that's just ridiculous, so quit that. And if you're actually a con artist, you're terrible at whatever con you're trying to execute here and should go do something with social media, which actually looks like your skill set.
I can understand your dislike of Gleb's approach and even see many of your concerns as justified; do you really think your actions in this thread are helping you get what you want, though? They certainly won't make Gleb himself listen to you, and they also don't make you sympathetic to onlookers. To the extent that you have issues with Gleb's actions, it seems like pointing them out in a non-abusive way for others to judge would be far more effective.
delicate symmetry-breaking which can only come from either the training procedure or noise in the data, rather than the model itself
I'm still not convinced. The pointwise nonlinearities introduce a preferred basis, and cause the individual hidden units to be much more meaningful than linear combinations thereof.
Yeah; I discussed this with some others and came to the same conclusion. I do still think that one should explain why the preferred basis ends up being as meaningful as it does, but agree that this is a much more minor objection.
Do you have a study in mind that shows this?
Comparing different recognition systems is complex, and it's important to compare apples to apples. CNNs are comparable only to rapid feedforward recognition in the visual system, which can be measured with rapid serial presentation (RSP) experiments. In an untimed test the human brain can use other modules, memory fetches, multi-step logical inferences, etc. (all of which are now making their way into ANN systems, but still).
The RSP setup ensures that the brain can only use a single feedforward pass from V1 to PFC, without using more complex feedback and recurrent loops. It forces the brain to use a network configuration similar to what current CNNs use - CNNs descend from models of that pathway, after all.
In those tests, CNNs from 2013 rivaled primate IT cortex representations [1], and 2015 CNNs are even better.
That paper uses a special categorization task with monkeys, but the results generalize to humans as well. There are certainly some mistakes that a CNN will make which a human would not make even with the 150ms time constraint, but the CNNs make fewer mistakes on the more complex tasks with lots of categories, whereas humans presumably still have lower error on basic recognition tasks (though to some extent that is because researchers haven't focused much on getting to > 99.9% accuracy on simpler recognition tasks).
Cool, thanks for the paper, interesting read!
I don't see that (4) should be necessary; I may be misunderstanding it.
If you apply a change of basis to the inputs to a non-linearity, then I'm sure it will destroy performance. If you apply a change of basis to the outputs, then those outputs will cease to look meaningful, but it won't stop the algorithm from working well. But just because the behavior of the algorithm is robust to applying a particular linear scrambling doesn't mean that the representation is not natural, or that all of the scrambled representations must be just as natural as the one we started with.
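To make that concrete, here's a minimal numpy sketch (a hypothetical toy two-layer net, not from any paper): a scramble applied to the outputs of the nonlinearity can be absorbed by the next layer's weights, while the same scramble applied to its inputs cannot.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Toy two-layer net: y = W2 @ relu(W1 @ x)
W1 = rng.normal(size=(5, 3))
W2 = rng.normal(size=(2, 5))
x = rng.normal(size=3)

h = relu(W1 @ x)
y = W2 @ h

# Random invertible change of basis on the 5 hidden units
# (a generic Gaussian matrix is invertible almost surely).
M = rng.normal(size=(5, 5))
M_inv = np.linalg.inv(M)

# Scrambling the *outputs* of the nonlinearity is harmless: fold
# M^{-1} into the next layer and the overall function is unchanged.
print(np.allclose(y, (W2 @ M_inv) @ (M @ h)))           # True

# Scrambling the *inputs* to the nonlinearity is not absorbable:
# relu(M @ z) != M @ relu(z) in general, so the function changes.
print(np.allclose(y, (W2 @ M_inv) @ relu(M @ W1 @ x)))  # False (generically)
```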
Yeah, I should be a bit more careful about (4). The point is that many papers which argue that a given NN is learning "natural" representations do so by looking at what an individual hidden unit responds to (as opposed to looking at the space spanned by the hidden layer as a whole). Any such argument seems dubious to me without further support, since it relies on a sort of delicate symmetry-breaking which can only come from either the training procedure or noise in the data, rather than the model itself. But I agree that if such an argument were accompanied by a justification of why the training procedure or data noise or some other factor led to the symmetry being broken in a natural way, then I would potentially be happy.
I think I've yet to see a paper that convincingly supports the claim that neural nets are learning natural representations of the world
Taboo natural representations?
Without defining a natural representation (since I don't know how to), here are 4 properties that I think a representation should satisfy before it's called natural (I also give these in my response to Vika):
(1) Good performance on different data sets in the same domain.
(2) Good transference to novel domains.
(3) Robustness to visually imperceptible perturbations to the input image.
(4) "Canonicality": replacing the learned features with a random invertible linear transformation of the learned features should degrade performance.
Here's an example of recurrent neural nets learning intuitive / interpretable representations of some basic aspects of text, like keeping track of quotes and brackets: http://arxiv.org/abs/1506.02078
I know there are many papers that show that neural nets learn features that can in some regimes be given nice interpretations. However in all cases of which I am aware where these representations have been thoroughly analyzed, they seem to fail obvious tests of naturality, which would include things like:
(1) Good performance on different data sets in the same domain.
(2) Good transference to novel domains.
(3) Robustness to visually imperceptible perturbations to the input image.
Moreover, ANNs almost fundamentally cannot learn natural representations because they fail what I would call the "canonicality" test:
(4) Replacing the learned features with a random invertible linear transformation of the learned features should degrade performance.
Note that the reason for (4) is that if you want to interpret an individual hidden unit in an ANN as being meaningful, then it can't be the case that a random linear combination of lots of units is equally meaningful (since a random linear combination of e.g. cats and dogs and 100 other things is not going to have much meaning).
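For illustration, here is a minimal numpy sketch of test (4) on synthetic stand-in "features" (all names and numbers here are made up for the example): individual-unit selectivity collapses under a random change of basis, even though a linear readout could absorb the change and leave task performance intact.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 10  # examples, classes (= hidden units, for simplicity)

# Synthetic stand-in for "learned features": unit i fires for class i.
labels = rng.integers(0, k, size=n)
feats = rng.normal(scale=0.1, size=(n, k))
feats[np.arange(n), labels] += 1.0

def unit_selectivity(F, y):
    # Per unit: fraction of its class-conditional mean activation
    # (in absolute value) concentrated on its single best class.
    means = np.abs(np.stack([F[y == c].mean(axis=0) for c in range(k)]))
    return float((means.max(axis=0) / means.sum(axis=0)).mean())

M = rng.normal(size=(k, k))  # random invertible linear transformation
scrambled = feats @ M.T      # test (4): replace features with M @ feats

print(unit_selectivity(feats, labels))      # high: units look meaningful
print(unit_selectivity(scrambled, labels))  # low: meaning smeared across units
# Yet a linear readout can fold in M^{-1}, so task performance is
# unchanged; interpreting single units therefore needs a story about
# where the symmetry-breaking comes from.
```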
That was a bit long-winded, but my question is whether the linked paper or any other papers provide representations that you think don't fail any of (1)-(4).
That wasn't the reason she gave for banning him.
I'm 85% sure that you're VoiceOfRa.