I think[1] people[2] probably trust individual tweets way more than they should.
Like, just because someone sounds official and serious, and it's a piece of information that's in line with your worldview, doesn't mean it's actually true. Or maybe it is true, but missing important context. Or it's saying A causes B when it's more like A and C and D all cause B together, and actually most of the effect is from C, but now you're laser-focused on A.
Also, keep in mind that the tweets you're seeing are optimized for piquing the interest of people like you, not for truth.
I'm definitely not the first person to say this, but it feels worth saying again.
Wait a minute, "agentic" isn't a real word? It's not on dictionary.com or Merriam-Webster or Oxford English Dictionary.
Wait, my bad, I didn't expect so many people to actually see this.
This is kind of silly, but I had an idea for a post that I thought someone else might publish before I'd written it up. So I figured I'd post a hash of the thesis here.
It's not just about, idk, getting more street cred for coming up with an idea. This is also what I'm planning to write for my MATS application to Lee Sharkey's stream. So in case someone else did write it up before me, I would have some proof that I didn't just copy the idea from a post.
(It's also a bit silly because my guess is that the thesis isn't even that original)
Edit: to answer the original question, I will post something before October 6th on this if all goes to plan.
That was the SHA-256 hash for:
What if a bag of heuristics is all there is and a bag of heuristics is all we need? That is, (1) we can decompose each forward pass in current models into a set of heuristics chained together and (2) heuristics chained together is all we need for AGI
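For anyone curious, the commit-then-reveal trick above is easy to reproduce. Here's a minimal sketch (the thesis string below is a stand-in; the real preimage has to match the originally hashed text byte-for-byte, including whitespace and capitalization):

```python
import hashlib

# Stand-in for the thesis text that gets hashed. Any change to the
# string, even one space, produces a completely different digest.
thesis = "What if a bag of heuristics is all there is?"

# Commit: publish only the hex digest. SHA-256 is preimage-resistant,
# so the digest reveals nothing usable about the thesis.
digest = hashlib.sha256(thesis.encode("utf-8")).hexdigest()
print(digest)  # 64 hex characters

# Reveal: later, anyone can recompute the digest from the revealed
# text and check that it matches the published one.
assert hashlib.sha256(thesis.encode("utf-8")).hexdigest() == digest
```

This only proves you had the exact text at commit time, not that the idea was original, which fits the caveat below.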
Here's my full post on the subject
I think people see it and think "oh boy I get to be the fat people in Wall-E"
(My friend on what happens if the general public feels the AGI)