norn
norn has not written any posts yet.

It could already be fixed
Is it not already sort of fixed?
We know how well PRS perform in other ancestries, right? It just means that PRS are a little less accurate there, not that they don't work today.
So what is the "best" way to validate them, in your opinion? Is there anything better than sibling comparisons?
"I tried to explain to him about transfer learning starting to work back in 2015 or so (a phenomenon I regarded as extremely important and which has in fact become so dominant in DL we take it utterly for granted) and he denied it with the usual Hansonian rebuttals; or when he denied that DL could scale at all, he mostly just ignored the early scaling work I linked him to like Hestness et al 2017. Or consider Transformers: a lynchpin of his position was that algorithms have to be heavily tailored to every domain and problem they are applied to, like they were in ML at that time—an AlphaGo in DRL... (read more)