Rank: #10 out of 4859 in peer accuracy at Metaculus for the time period of 2016-2020.
Why do you think forecasting data is limited? You can forecast all sorts of events that don't currently have any existing forecasts made on them.
You don't need to be smarter in every possible way to get a radical increase in the speed at which illnesses are solved.
I think part of the motivation for building AGI is to solve all illnesses for everyone, not just for people who aren't yet born.
Either increased adenosine production or decreased adenosine reuptake in fascia seems like a plausible mechanism for the fatigue we observe in ME/CFS after exercise. Adenosine is anti-inflammatory, so there's a reason why the body might upregulate it, for example when a COVID/Lyme infection produces a lot of inflammation. It would explain the symptoms of CFS.
Someone should run an experiment to test whether adenosine in fascial tissue is increased in those patients after exercise.
A quick case for BGP: Effective reprogenetics would greatly improve many people's lives by decreasing many disease risks.
Does this basically mean not believing that AGI will happen within the next two decades? Aren't we mostly talking about diseases that come with age, in people who aren't yet born, so the events we would be preventing happen 50-80 years from now, at which point we will have radically different medical capabilities if AGI arrives in the next two decades?
Given that most of the models value Kenyan lives more than other lives, the thesis that Kenyan language use drives LLM behavior here is quite interesting.
I do think that using "It's" or "It is" is part of the pattern.
I made the change.
When it comes to medical questions, a patient might ask a chatbot a medical question with the intent to solve their medical issue. On the other hand, someone might ask the chatbot a medical question to understand where the chatbot stands on the topic.
If I ask my friend "Do you think I should vaccinate my child?" I could be asking it because I want to make a decision about vaccination. I could also ask the question because I want to evaluate whether or not my friend is an antivaxxer.
Most humans understand that a lot of the questions asked of them are intended to evaluate whether they belong to the right tribe, and they act accordingly. That's a pattern that the AI is going to learn from training on a large corpus of human data.
We already have the concept of simulacrum levels; evaluation awareness is about guessing that the simulacrum level of a question isn't 1.
If you look at your last post on LessWrong it starts with:
"We are on the brink of the unimaginable. Humanity is about to cross a threshold that will redefine life as we know it: the creation of intelligence surpassing our own. This is not science fiction—it’s unfolding right now, within our lifetimes. The ripple effects of this seismic event will alter every aspect of society, culture, and existence itself, faster than most can comprehend."
The use of bold is more typical of AI writing. The ':' appears much more often in AI writing. The em-dash appears much more often in AI writing, especially in the "is not an X, it's a Y" construction.
Em-dashes used to be a sign of high-quality writing, where the writer was thoughtful enough to know how to use one. Today, they're a sign of low-quality LLM writing.
It's also much more narrative-driven than the usual opening paragraph of a LessWrong post.
A huge problem is that a lot of capital is invested in influencing public opinion. Over the last decade, projects that on the surface look like they are about improving epistemic norms have usually been captured to enforce a specific political agenda.
Information warfare matters for the shape of our information landscape. You had the Biden administration, on the one hand, spreading antivax information in Malaysia and asking Facebook not to take down the bots it used to spread that information, while at the same time pressuring Facebook to censor truthful posts from people describing side effects they had personally experienced from vaccines that Western companies, as opposed to China, profit from.
As far as Wikipedia goes, it's important to understand what it has done in recent years. As Katherine Maher said, truth is not the goal of Wikipedia; the goal is to summarize what "reliable sources" say. When the sources Wikipedia considers reliable on a topic have a bias, Wikipedia by design takes over that bias.
Wikipedia's idea of reliable sources, where a Harvard professor publishing a meta-review can be considered less reliable than the New York Times, is a bit weird, but it's not inherently inconsistent.