This still included other algorithmically determined tweets -- tweets that people you followed had liked, and later more generally "recommended" tweets. These are no longer present in the "following" tab.
I'm pretty sure there were no tabs at all before the acquisition.
Twitter did use an algorithmic timeline before (e.g. tweets you might be interested in, tweets that people you followed had liked); it was just less algorithmic than the current "for you" tab. The last time the timeline was completely like the current "following" tab was many years ago.
The algorithm has been horrific for a while.
After Musk took over, they implemented a mode which doesn't use an algorithm on the timeline at all. It's the "following" tab.
In the past we already had examples ("logical AI", "Bayesian AI") where galaxy-brained mathematical approaches lost out to less theory-based software engineering.
Cities are very heavily Democratic, while rural areas are only moderately Republican.
I don't think this is compatible with both getting about equally many votes, because many more Americans live in cities than in rural areas:
In 2020, about 82.66 percent of the total population in the United States lived in cities and urban areas.
https://www.statista.com/statistics/269967/urbanization-in-the-united-states/
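A rough back-of-the-envelope sketch of the arithmetic, using the urbanization figure above; the 65%/45% Democratic vote shares are illustrative assumptions I made up for the "very heavily Democratic" and "moderately Republican" descriptions, not real data:

```python
# Back-of-the-envelope check: with ~83% of people in urban areas, a heavy
# Democratic lean there cannot be offset by a moderate Republican lean in
# rural areas. Vote shares below are illustrative assumptions, not data,
# and equal turnout in both groups is assumed.

urban_share = 0.8266          # share of population in cities/urban areas (Statista, 2020)
rural_share = 1 - urban_share

dem_share_urban = 0.65        # "very heavily Democratic" (assumed)
dem_share_rural = 0.45        # "moderately Republican" (assumed)

dem_total = urban_share * dem_share_urban + rural_share * dem_share_rural
rep_total = 1 - dem_total

print(f"Democratic share of the total vote: {dem_total:.1%}")
print(f"Republican share of the total vote: {rep_total:.1%}")
# => roughly 61.5% vs 38.5% -- nowhere near an even split.
```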
It's not that "they" should be more precise, but that "we" would like to have more precise information.
We know pretty conclusively now from The Information and Bloomberg that for OpenAI, Google, and Anthropic, new frontier base LLMs have yielded disappointing performance gains. The question is which of your possibilities caused this.
They do mention that the availability of high quality training data (text) is an issue, which suggests it's probably not your first bullet point.
Ah yes, the fork asymmetry. I think Pearl believes that correlations reduce to causations, which is probably why he wouldn't try to go the other way and reduce causal structure to a set of (in)dependencies. I'm not sure whether the latter reduction is ultimately possible in the universe. Are the correlations present in the universe, e.g. as defined via the Albert/Loewer Mentaculus probability distribution, sufficient to recover the familiar causal structure of the universe?
This approach goes back to Hans Reichenbach's book The Direction of Time. I think the problem is that the set of independencies alone is not sufficient to determine a causal and temporal order. For example, the same independencies between three variables could be interpreted as the chains A → B → C and A ← B ← C. I think Pearl talks about this issue in the last chapter.
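A minimal simulation sketch of that point (my own illustration, not taken from Pearl or Reichenbach): both chain orientations yield a non-zero correlation between A and C that vanishes once we condition on B, so the independence pattern alone can't tell us which direction is causal.

```python
# Sketch: the chain A -> B -> C and the reversed chain A <- B <- C imply the
# same conditional independence (A independent of C given B). Linear-Gaussian
# simulation, using partial correlation as a crude independence test.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def partial_corr(x, y, z):
    """Correlation of x and y after linearly regressing out z from each."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

# Chain A -> B -> C
a = rng.normal(size=n)
b = 0.8 * a + rng.normal(size=n)
c = 0.8 * b + rng.normal(size=n)
print("A -> B -> C:",
      f"corr(A,C) = {np.corrcoef(a, c)[0, 1]:.3f},",
      f"corr(A,C | B) = {partial_corr(a, c, b):.3f}")

# Reversed chain A <- B <- C
c2 = rng.normal(size=n)
b2 = 0.8 * c2 + rng.normal(size=n)
a2 = 0.8 * b2 + rng.normal(size=n)
print("A <- B <- C:",
      f"corr(A,C) = {np.corrcoef(a2, c2)[0, 1]:.3f},",
      f"corr(A,C | B) = {partial_corr(a2, c2, b2):.3f}")

# Both cases print a clearly non-zero marginal correlation and a ~0 partial
# correlation: the same independencies, two different causal orders.
```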
Tailcalled talked about this two years ago. A model which predicts text does a form of imitation learning. So it is bounded by the text it imitates, and by the intelligence of humans who have written the text. Models which predict future sensory inputs (called "predictive coding" in neuroscience, or "the dark matter of intelligence" by LeCun) don't have such a limitation, as they predict reality more directly.