I am a PhD student in computer science at the University of Waterloo, supervised by Professor Ming Li and advised by Professor Marcus Hutter.
My current research is related to applications of algorithmic probability to sequential decision theory (universal artificial intelligence). Recently I have been trying to start a dialogue between the computational cognitive science and UAI communities. Sometimes I build robots, professionally or otherwise. Another hobby (and a personal favorite of my posts here) is the Sherlockian abduction master list, which is a crowdsourced project seeking to make "Sherlock Holmes" style inference feasible by compiling observational cues. Give it a read and see if you can contribute!
See my personal website colewyeth.com for an overview of my interests and work.
Yeah, this is also just a pretty serious red flag for the OP’s epistemic humility… it amounts to saying “I have this brilliant idea but I am too brilliant to actually execute it, will one of you less smart people do it for me?” This is not something one should claim without a correspondingly stellar track record - otherwise, it strongly indicates that you simply haven’t tested your own ideas against reality.
Contact with reality may lower your confidence that you are one of the smartest young supergeniuses, a hypothesis that should have around a 1 in a billion prior probability.
Which seems more likely: that capabilities happen to increase very quickly around human genius levels of intelligence, or that relative capabilities, as compared to the rest of humanity, by definition only increase once you're on the frontier of human intelligence?
Einstein found a lot of then-undiscovered physics because he was somewhat smarter/more insightful than anyone else, and so he got ahead. This says almost nothing about the absolute capabilities of intelligence.
If orcas were actually that smart wouldn’t it be dangerous to talk to them for exactly the same reasons it would be dangerous to talk to a superintelligence?
No, it's possible for LLMs to solve a subset of those problems without being AGI (and quite conceivable, since the history of AI research shows we often assume tasks are AI-complete when they are not, e.g. Hofstadter with chess, Turing with the Turing test).
I agree that the tests which are still standing are pretty close to AGI; this is not a problem with Thane's list, though. He is correctly avoiding the failure mode I just pointed out.
Unfortunately, this does mean that we may not be able to predict that AGI is imminent until the last moment. That is a consequence of the black-box nature of LLMs and our general confusion about intelligence.
So the thing that coalitional agents are robust at is acting approximately like belief/goal agents, and you’re only making a structural claim about agency?
If so, I find your model pretty plausible.
This sounds like how Scott formulated it, but as far as I know none of the actual (semi)formalizations look like this.
What is this coalitional structure for if not to approximate an EU maximizing agent?
A couple of years later, do you still believe that foom will happen any year now?
How would this model treat mathematicians working on hard open problems? Working on P vs NP might be counterfactually impactful just because no one else is smart enough or has the right advantage to solve it. Insofar as central problems of a field have been identified but not solved, I'm not sure your model gives good advice.
Yes, but it's also very easy to convince yourself you have more evidence than you do, e.g. by inventing a theory that is actually crazy but seems insightful to you (which may or may not apply to this case).
I think intelligence is particularly hard to assess in this way because of recursivity.