I'm also excited because, while I think I have most of the individual subskills, I haven't personally been nearly as good at listening to wisdom as I'd like, and feel traction on trying harder.
Great post! I personally have a tendency to disregard wisdom because it feels "too easy": if I am given some advice and it works, I assume it was just luck or correlation. Then I have to go and try "the other way" (my way...), get a punch in the face from the universe, and only then go "ohhh, so that's why I should have stuck to the advice".
Now that I think about it, it might also be intellectual arrogance: thinking I am smarter than the advice, or than the person giving it.
But lately I have started to think a lot about why we assume that successful outcomes require overreaching and burnout. Why do we have to fight so hard for everything, and feel kind of guilty if something came to us without much effort? So maybe my failure to heed words of wisdom comes from a need to achieve (overdo, modify, add, reduce, optimize, etc.) rather than to just be.
My experience says otherwise, but I might just have happened to stumble on some militant foodies.
"Conservative evangelical Christians spend an unbelievable amount of time focused on God: Church services and small groups, teaching their kids, praying alone and with friends. When I was a Christian I prayed 10s of times a day, asking God for wisdom or to help the person I was talking to. If a zealous Christian of any stripe is comfortable around me they talk about God all the time."
Isn't this true for ALL true believers, regardless of conviction? I could easily replace 'Conservative evangelical Christians and God' with 'Foodies and food', 'Teenage girls and influencers', 'Rationalists and logic', 'Gym bros and grams of protein per kg/lb of body mass'. There seems to be something inherent in the urge to preach to others out of goodwill: we want to share something that we believe would benefit them. The road to hell isn't paved with good intentions for nothing...
For national security reasons, it would be strange to assume that there is no coordination among the US firms already. And... are we really sure that China is behind in the AGI race?
And one wonders how much of the bottleneck is TSMC (the Western "AI bloc" has really put a lot of its eggs in one basket...) and how much is customer preference for Nvidia chips. The chip wars of 2025 will be very interesting to follow. Thanks for a good chip summary!
What about AMD? I saw on the latest TOP500 supercomputer list that systems using both AMD CPUs and AMD GPUs now hold places 1, 2, 5, 8, and 10 among the top 10. Yes, the workloads on these machines are a bit different from a pure GPU training cluster, but still.
https://top500.org/lists/top500/2024/11/
Yes, the soon-to-be-here "human-level" AGI people talk about is, for all intents and purposes, ASI. Show me one person who performs at the highest expert level on thousands of subjects, has the content of all human knowledge memorized, and can draw the most complex inferences on that knowledge across multiple domains in seconds.
It's interesting that you mention hallucination as a bug/artefact; I think hallucinating is what we humans do all day, every day, when we are trying to solve a new problem. We think up a solution we really believe is correct, then we try it, and more often than not we realize we had it all wrong, so we try again and again and again. I think AIs will never be free of this; I just think it will be part of their creative process, just as it is in ours. It took Albert Einstein a decade or so to figure out general relativity; I wonder how many times he "hallucinated" a solution that turned out to be wrong during those years. The important part is that he could self-correct, dive deeper and deeper into the problem, and finally solve it. I firmly believe that AI will very soon be very good at self-correcting, and if you then give your "remote worker" a day or 10 to think through a really hard problem, not even the sky will be the limit...
When you predict (either personally or publicly) future dates for AI milestones, do you:
Assume some version of Moore's "law", i.e. exponential growth,
or
Assume some near-term computing gains, e.g. quantum computing, i.e. doubly exponential growth? (A quick sketch of the difference is below.)
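To make the gap between the two assumptions concrete, here is a minimal sketch in Python. The 2-year doubling time and the normalization to 1x at year zero are hypothetical placeholders chosen for illustration, not forecasts:

```python
# Minimal sketch contrasting the two growth assumptions in the question.
# The 2-year doubling time and the 1x baseline at t=0 are hypothetical
# placeholders, not real forecasts.

def exponential(t_years: float, doubling_time: float = 2.0) -> float:
    """Moore's-law-style growth: capability doubles every `doubling_time` years."""
    return 2.0 ** (t_years / doubling_time)

def doubly_exponential(t_years: float, doubling_time: float = 2.0) -> float:
    """Growth where the exponent itself doubles every `doubling_time` years
    (cf. the doubly exponential rate sometimes claimed for quantum computing)."""
    return 2.0 ** (2.0 ** (t_years / doubling_time) - 1.0)

for t in range(0, 11, 2):
    print(f"t = {t:2d} y   exponential: {exponential(t):6.1f}x   "
          f"doubly exponential: {doubly_exponential(t):14.1f}x")
```

After ten years the first curve gives about a 32x multiplier while the second is already past two billion, so which assumption you pick dominates any milestone date far more than the other details of the forecast.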