My understanding, very much secondhand, is that current LLMs still use a separately trained part of the model's input space for images. I'm very unsure how the model weights integrate the different types of thinking, but I'm by default skeptical that it integrates cleanly into other parts of reasoning.
That said, I'm also skeptical that this is fundamentally a hard part of the problem, as simulation and generated data seem like a very tractable route to improving this, if/once model developers see it as a critical bottleneck for tens of billions of dollars in revenue.
That seems correct, but I think all of those are still useful to investigate with AI, despite the relatively higher bar.
...Thus, to explain the Fermi Paradox, we should posit increased odds that the Great Filter is in front of us. (However, my prior for the Great Filter being ahead of humanity is pretty low; we're too close to AI and the stars. Keep in mind that even a paperclipper has not been Filtered: a Great Filter prevents any intelligence from escaping Earth.)
Or that the filter is far behind us - specifically, eukaryotes only evolved once. And in the chain model by Sandberg et al., pre-intelligence filters hold the vast majority of the probability mass, so it seems to me that eliminating intelligence as a filter shifts the remaining probability mass for a filter backwards in time in expectation.
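To make that renormalization concrete, here's a minimal sketch (the numbers are made up for illustration, not Sandberg et al.'s actual estimates) of how ruling out intelligence as the filter redistributes the probability mass:

```python
# Toy illustration only: made-up prior over candidate Great Filter steps,
# not the actual estimates from Sandberg et al.
prior = {
    "abiogenesis": 0.40,
    "eukaryogenesis": 0.35,   # eukaryotes only evolved once
    "intelligence": 0.15,
    "future filter (ahead of us)": 0.10,
}

# Condition on "intelligence is not the filter" (even a paperclipper
# would not have been Filtered) and renormalize the remaining steps.
remaining = {k: v for k, v in prior.items() if k != "intelligence"}
total = sum(remaining.values())
posterior = {k: v / total for k, v in remaining.items()}

for step, p in posterior.items():
    print(f"{step}: {prior[step]:.2f} -> {p:.2f}")
# Because pre-intelligence steps hold most of the prior mass, most of the
# freed-up probability lands on earlier steps, so the expected location
# of the filter moves backwards in time.
```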
That being said, this strategy relies on the approaches that are fruitful for us being the same approaches that are fruitful for AI-assisted, AI-accelerated, or AI-performed research. (Again reasonable, but not certain.)
What is being excluded by this qualification?
I strongly agree, and as I've argued before, long timelines to ASI are possible even if we have proto-AGI soon, and aligning AGI doesn't necessarily help solve ASI risks. It seems like people are being myopic, assuming their modal outcome is effectively certain, and/or not clearly holding multiple hypotheses about trajectories in their minds, so they are undervaluing conditionally high-value research directions.
Maybe we could look at 4-star generals, of which there are under 40 total in the US? Not quite as selective, but a more similar process. (Or perhaps around as selective, given the number of US Catholics vs. US citizens.)
You could compare to other strongly meritocratic organizations (US Senate? Fortune 500 C-level employees?) to see whether the church is very different.
The boring sense is enough to say that it increases in intelligence, which was the entire point.
"infer a virtue-ethical utility function from a virtue-ethical policy"
The assumption of virtue ethics isn't that virtue is unknown and must be discovered - it's that it's known and must be pursued. If the virtuous action, as you posit, is to consume ice cream, intelligence would allow an agent to acquire more ice cream, eat more over time by not making themselves sick, etc.
But any such decision algorithm, for a virtue ethicist, routes through continued re-evaluation of whether the acts are virtuous in the current context, not embracing some farcical LDT version of needing to pursue ice cream at all costs. There is an implicit utility function which values intelligence, but it's not then inferring back what virtue is, as you seem to claim. Your assumption, which is evidently that the entire thing turns into a compressed and decontextualized utility function ("algorithm"), ignores the entire hypothetical.
Yeah, I'm only unsurprised because I've been tracking other visual reasoning tasks and have already updated towards the verbal intelligence of LLMs being pretty much disconnected from spatial and similar reasoning. (But these classes of visual task seem not obviously harder, and visual data generation is very feasible at scale, so I do expect reasonably rapid progress now that it's a focus, conditional on sufficient attention from developers.)