Prediction (influenced by R1-Zero): By EOY, expert-level performance will be reported on outcome prediction for a certain class of AI experiments - those that can be specified concisely in terms of code and data sets that:
We don't know how narrow it is yet. If they did for algebra and number theory something like what they did for geometry in AlphaGeometry (v1) - providing it with a well-chosen set of operations - then I'll be more inclined to agree.
I don't understand why people aren't freaking out about this news. Waiting for the paper, I guess.
What we want is orthogonal though, right? Unless you think that metaphysics is so intractable to reason about logically that the best we can do is go by aesthetics.
Unfortunately the nature of reality belongs to the collection of topics that we can't expect the scientific method alone to guide us on. But perhaps you agree with that, since in your second paragraph you essentially point out that practically all of mathematics belongs to the same collection.
It's not necessary to bring quantum physics into it. Isomorphic consciousness-structures have the same experience (else they wouldn't be isomorphic, since we make their experience part of them). The me up to the point of waking up tomorrow (or the point of my apparent death) is such a structure (with no canonical language, unfortunately; there are infinitely many that suffice), and so it has an elementary class: the structures that elementarily extend it, in particular those that extend its experience past tomorrow morning.
+2 for brevity! A couple more explorations of this idea that I didn't see linked yet. They are more verbose, but in a way I appreciate.
If you want to explore this idea further, I'd love to join you.
But "more people are better" ought to be a belief of everyone, whether pro-fertility or not. It's an "other things being equal" statement, of course - more people at no cost or other tradeoff is good. One can believe that and still think that fewer people would be a good idea in the current situation. But if you don't think more people are good when there's no tradeoff, I don't see what moral view you can have other than nihilism or some form of extreme egoism.
Do all variants of downside focused ethics get dismissed as extreme egoism? Hard to see them as nihilistic.
I suspect clarity and consensus on the meaning of "more people at no cost or other tradeoff" to be difficult. If "more people" means more happy people preoccupied with the welfare of the least fortunate, then sure "at no cost or other tradeoff" should suffice for practically everyone to get behind it. But that seems like quite a biased distribution for a default meaning of "more people."
When capability is performing unusually quickly
Assuming you meant "capability is improving." I expect capability will always feel like it's improving slowly in an AI researcher's own work, though... :-/ I'm sure you're aware that many commenters have suggested this as an explanation for why AI researchers seem less concerned than outsiders.
How about Gary Marcus as a situational-awareness-dampening, counter-panic psyop?