Comments
norn

"I tried to explain to him about transfer learning starting to work back in 2015 or so (a phenomenon I regarded as extremely important and which has in fact become so dominant in DL we take it utterly for granted) and he denied it with the usual Hansonian rebuttals; or when he denied that DL could scale at all, he mostly just ignored the early scaling work I linked him to like Hestness et al 2017. Or consider Transformers: a lynchpin of his position was that algorithms have to be heavily tailored to every domain and problem they are applied to, like they were in ML at that time—an AlphaGo in DRL had nothing to do with a tree search chess engine, much less stuff like Markov random fields in NLP, and this was just a fact of nature, and Yudkowsky’s fanciful vaporing about ‘general algorithms’ or ‘general intelligence’ so much wishful thinking. Then Transformers+scaling hit and here we are…"

I can't find where you've had this exchange with him; can you find it?

If his embarrassing mistakes (and his refusal to own up to them) are documented and demonstrable, why not just blast them onto his blog and Twitter?

norn

"It could already be fixed"

Is it not already sort of fixed?

We know how well PRS perform in other ancestries, right? That just means the PRS are somewhat less accurate there, not that they don't work today.
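As a purely illustrative sketch (nothing here comes from the thread; the dataframe df and its columns "prs", "phenotype", and "ancestry" are all hypothetical names), "somewhat less accurate" can be put in numbers by comparing the variance in the phenotype that the PRS explains within each ancestry group:

    import pandas as pd

    def prs_r2_by_ancestry(df: pd.DataFrame) -> pd.Series:
        # Squared PRS-phenotype correlation (variance explained), per ancestry group.
        return df.groupby("ancestry").apply(
            lambda g: g["prs"].corr(g["phenotype"]) ** 2
        )

    # e.g. r2 = prs_r2_by_ancestry(df); r2 / r2.max() gives each group's accuracy
    # relative to the best-predicted group, which is the sense in which the PRS
    # is "less good" rather than non-functional in other ancestries.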

norn

So what is the "best" way to validate them, in your opinion? Is there anything better than sibling comparisons?
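For what sibling comparisons buy you, here is a minimal illustrative sketch (again with hypothetical column names, not anyone's actual pipeline): regress within-pair phenotype differences on within-pair PRS differences. Because siblings share ancestry and family environment, the within-pair slope strips out confounding from population stratification and parental effects.

    import pandas as pd
    import statsmodels.api as sm

    def sibling_validation(pairs: pd.DataFrame):
        # One row per sibling pair; columns prs_1, prs_2, pheno_1, pheno_2 are hypothetical.
        d_prs = pairs["prs_1"] - pairs["prs_2"]        # within-pair PRS difference
        d_pheno = pairs["pheno_1"] - pairs["pheno_2"]  # within-pair phenotype difference
        return sm.OLS(d_pheno, sm.add_constant(d_prs)).fit()

    # If the within-family slope is close to the ordinary between-family slope,
    # the PRS is mostly capturing direct genetic effects rather than confounding.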