Anders Lindström

Comments

Yes, a single strong, simple argument or piece of evidence that could refute the whole LLM approach would be more effective, but as of now no one knows whether the LLM approach will lead to AGI or not. Still, I think you've addressed, in a meaningful way, interesting and important details that are often overlooked in the broad hype statements repeated and thrown around as universal facts and evidence for "AGI within the next 3-5 years".

This might seem like a ton of annoying nitpicking.

You don't need to apologize for having a less optimistic view of current AI development. I've never heard anyone driving the hype train apologize for their opinions.

I know many of you dream of having an IQ of 300 to become the star researcher and avoid being replaced by AI next year. But have you ever considered whether nature has actually optimized humans for staring at equations on a screen? If most people don’t excel at this, does that really indicate a flaw that needs fixing?

Moreover, how do you know that a higher IQ would lead to a better life—for the individual or for society as a whole? Some of the highest-IQ individuals today are developing technologies that even they acknowledge carry Russian-roulette odds of wiping out humanity—yet they keep working on them. Should we really be striving for more high-IQ people, or is there something else we should prioritize?

I would like to ask for a favor—a favor for humanity. As the AI rivalry between the US and China has reached new heights in recent days, I urge all parties to prioritize alignment over advancement. Please. We, humanity, are counting on your good judgment.

Perhaps.

https://www.politico.eu/article/us-elon-musk-troll-donald-trump-500b-ai-plan/

But Musk responded skeptically to an OpenAI press release that announced funding for the initiative, including an initial investment of $100 billion.

“They don’t actually have the money,” Musk jabbed.

In a follow-up post on his platform X, the social media mogul added, “SoftBank has well under $10B secured. I have that on good authority.”

Communicate the plan with the general public: Morally speaking, I think companies should share their plans in quite a lot of detail with the public.

Yes, I think so too, but it will never happen. AGI/ASI is too valuable to be discussed publicly. I have never been given the opportunity to have a say in any other big corporate decision regarding the development of weapons, and I am sure I will not have one this time either.

"They" will build the things "they" believe are necessary to protect "the American or Chinese way of life", and "they" will not ask you for permission or your opinion.

  • Money will be able to buy results in the real world better than ever.
  • People's labour gives them less leverage than ever before.
  • Achieving outlier success through your labour in most or all areas is now impossible.
  • There was no transformative leveling of capital, either within or between countries.

If this is the "default" outcome, there WILL be blood. The rational thing to do in that case is to get a proper prepper bunker and see what's left when the dust has settled.

Excellent points. My experience is that people in general do not like to think that the things they are doing could be done in other ways, or not at all, because that would mean rethinking their own role and purpose.

When you predict (either personally or publicly) future dates of AI milestones, do you:

Assume some version of Moore's "law", i.e. exponential growth,

or

Assume some near-term computing gains, e.g. from quantum computing, i.e. double-exponential growth?
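To make the difference concrete, here is a minimal sketch (my own illustration, not from the comment) comparing the two assumptions: plain exponential growth with a fixed doubling period, versus a hypothetical double-exponential trend in which the exponent itself grows exponentially. The `doubling_period` value is an assumed placeholder, not a claim about actual hardware trends.

```python
# Illustrative only: compute-growth multipliers under two assumptions.

def exponential(t, doubling_period=2.0):
    """Multiplier after t years if capacity doubles every doubling_period years."""
    return 2 ** (t / doubling_period)

def double_exponential(t, doubling_period=2.0):
    """Multiplier if the number of doublings itself doubles every doubling_period years."""
    return 2 ** (2 ** (t / doubling_period) - 1)

for years in (2, 4, 8):
    print(years, exponential(years), double_exponential(years))
```

After 8 years the first assumption yields a 16x multiplier while the second yields 32768x, which is why the choice between them dominates any milestone-date prediction.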
