Oh, I mean "required" as in: to get a degree in a certain subject, you need to write a thesis as your rite of passage.
Yes, you are right. Adapt or die. AI can be a wonderful tool for learning, but the way it is used right now, where everyone has to claim they don't use it, is beyond silly. I guess there will be some kind of reckoning soon.
By the time you have an AI that can monitor and figure out what you are actually doing (or trying to do) on your screen, you do not need the person. Ain't worth the hassle to install cameras that will be useless in 12 months' time...
Cool project, I really like the clean and minimalist design AND functionality!
Two thoughts:
5-level ratings. I don't really like 5-level rating systems, because it's so easy to be a "lazy" reviewer and go for a three. I prefer 4- or 6-level rating systems where there is no "lazy" middle ground.
Preferred winner. Most of the time when I watch sports of any sort, I have a preferred winner. Perhaps adding that data point to each game could be interesting, to see in the aggregate how it affects the rating you give a game (rough sketch below).
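A minimal sketch of what that aggregation could look like, assuming per-game records with a rating and a boolean flag; the field names (rating, preferred_winner_won) are made up for illustration and would need to match the project's actual schema:

```python
# Hypothetical per-game records: a rating plus whether my preferred side won.
# Field names are invented for illustration; adapt to the project's schema.
from statistics import mean

games = [
    {"rating": 4, "preferred_winner_won": True},
    {"rating": 2, "preferred_winner_won": False},
    {"rating": 5, "preferred_winner_won": True},
    {"rating": 3, "preferred_winner_won": False},
]

# Aggregate: average rating split by whether the preferred side won.
for outcome in (True, False):
    ratings = [g["rating"] for g in games if g["preferred_winner_won"] is outcome]
    label = "preferred side won" if outcome else "preferred side lost"
    print(f"{label}: avg rating {mean(ratings):.2f} over {len(ratings)} games")
```

If the averages diverge a lot between the two groups, that would suggest the rooting interest is coloring the ratings.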
But how do we know that ANY data is safe for AI consumption? What if the scientific theories we feed the AI models contain fundamental flaws, such that when an AI runs off and does its own experiments in, say, physics or germline editing based on those theories, it triggers a global disaster?
I guess the best analogy for this dilemma is "The Chinese farmer" (the old man who lost his horse): I think we simply do not know which data will be good or bad in the long run.
Yes, a single strong, simple argument or piece of evidence that could refute the whole LLM approach would be more effective, but as of now no one has the answer to whether the LLM approach will lead to AGI. However, I think you've addressed, in a meaningful way, interesting and important details that are often overlooked in the broad hype statements that get repeated and thrown around as universal facts and evidence for "AGI within the next 3-5 years".
This might seem like a ton of annoying nitpicking.
You don't need to apologize for having a less optimistic view of current AI development. I've never heard anyone driving the hype train apologize for their opinions.
I know many of you dream of having an IQ of 300 to become the star researcher and avoid being replaced by AI next year. But have you ever considered whether nature has actually optimized humans for staring at equations on a screen? If most people don’t excel at this, does that really indicate a flaw that needs fixing?
Moreover, how do you know that a higher IQ would lead to a better life—for the individual or for society as a whole? Some of the highest-IQ individuals today are developing technologies that even they acknowledge carry Russian-roulette odds of wiping out humanity—yet they keep working on them. Should we really be striving for more high-IQ people, or is there something else we should prioritize?
I unfortunately believe that such policy changes are futile. I agree that right now it's possible (not 100% by any means) to detect a sh*tpost, at least within a domain I know fairly well. Remember that we are just at the beginning of Q2 2025. Where will we be with this in Q2 2026 or Q2 2027?
There is no other defense against the oncoming AI forum slaughter than people finding it more valuable to express their own true opinions and ideas than to copy-paste or let an agent talk for them.
No policy change is needed; a mindset change is.