It's more of a continuum than that. For example, some people argue that Claude Code writing most of the code that goes into improving Claude Code is an early form of partial RSI. Clearly, the loop is incomplete, but Claude is accelerating his own advancement in a real way. How far out of the loop do humans need to be for it to fully count?
How about these?
Seems plausible, but I would put this under the category of not having fully solved continual learning yet. A sufficiently capable continual learning agent should be able to do its own maintenance, short of a hardware failure.
Depending on how it's achieved, it might not be a matter of maintenance/hardware failure so much as compute capacity. Imagine if continual learning takes similar resources to standard pretraining of a large model. Then the model developers could continually train their own set of models, but it wouldn't be feasible for every user to get a personal version that continually learns what they want it to learn.
I consider Claude's "taste" to be pretty good, usually, but not P90 of humans with domain experience. I'd characterize his deficiencies more along the lines of a lack of ability to do long-term "steering" at a human level. This is likely related to a lack of long-term memory, and hence of the ability to do continual learning.
Is this sufficient? I don't really know the best place to put a disclosure.
https://en.wikipedia.org/wiki/User_talk:Alexis0Olson/Multilayer_perceptron#LLM_Disclosure
Claude Code is excellent these days and meets my bar for "AGI". It's capable of doing serious amounts of cognitive labor and doesn't get tired (though I did repeatedly hit my limits on the $20 plan and have to wait through the 5-hour cooldown).
I spent a good chunk of this weekend seeing whether I could get Claude to write a good Wikipedia article by telling it to use the site's rules and guidelines, then letting it iteratively critique and revise against those guidelines until the article fully met the standards. I wrote zero of the text myself, though I did paste some Q&A back and forth to NotebookLM to help with citations and had ChatGPT generate an additional flowchart visual to include.
After getting some second opinions from Gemini and ChatGPT, I will have Claude do a final round of revisions and then actually try to get it on Wikipedia. I will share the link here if it gets accepted--I don't really know how that works, but I bet Claude can help me figure it out.
Would the default valence be the valence of the "thing"?
My hypothesis for the airline industry boils down to "commodification". Airline companies follow incentives, and competition on price is fierce. Customers have little brand loyalty and chase the cheapest tickets, except occasionally avoiding the truly minimalist airlines. The companies see the customers voting with their wallets and optimize accordingly, leading to a race to the bottom.
In my experience, non-US carriers aren't that different. Maybe just a bit further behind and a bit more resistant to the slippery slope toward enshittification.
Anthropic is currently running an automated interview "to better understand how people envision AI’s role in their lives and work". I'd encourage Claude users to participate if you want Anthropic to hear your perspective.
Access it directly here (unless you've just recently signed up): https://claude.ai/interviewer
See Anthropic's post about it here: https://www.anthropic.com/research/anthropic-interviewer
My first thought was Amazon leveraging this for drone delivery.