Over a year ago, I posted an answer somewhere that received no votes and no comments, but I still feel that this is one of the most important things that our world needs right now.

I wish to persuade you of a few things here:

  • that truthseeking is an important problem,
  • that it's unsolved and difficult, 
  • that there are probably cost-effective things we can do to help solve it; I have one specific-but-nebulous idea in this vein to describe.

Truthseeking is important

Getting the facts wrong has consequences big and small. Here are some examples:

Ukraine war

On Feb. 24 last year, over 100,000 soldiers unexpectedly found themselves crossing the border into Ukraine, which they were told was full of Nazis, because one man in Moscow believed he could take the country in a few days. A propaganda piece about the Russian victory was even published by accident: "Ukraine has returned to Russia. Its statehood will be [...] returned to its natural state of part of the Russian world." Consensus opinion is that Putin made a grave mistake. Instead of a quick win, Putin took about 7% more of Ukraine than he had already taken, then lost some of that, while suffering over 100,000 Russian casualties and losing over 8,000 pieces of heavy military equipment (visually confirmed), all in the first year. He still has about 6,000 nuclear weapons, including the Poseidon "doomsday device", and hopes that veiled nuclear threats plus hundreds of thousands of conscripts will bring "victory". And by "victory", I mean that well over 100,000 people have already been killed, plus many more indirectly, while Ukraine suffers hundreds of billions of dollars in damage and the world economy, Russia included, suffers similarly.

Residential area of Mariupol after one month of Russian assault

Global warming

In 1979, the Charney report summarized the emerging consensus that CO2 causes global warming. Still, Republicans didn't want subsidies for renewables or carbon taxes, while Democrats and the media went on treating carbon-free nuclear power as if it were more dangerous than fossil fuels, even though its safety profile looks almost like this:

For decades after the Charney report, no new nuclear reactors were approved in the U.S., and it took over 30 years for solar and wind power to become economical. In the meantime, as CO2 accumulated in the atmosphere, human CO2 emissions nearly doubled:

(View charts: CO2 emissions, primary energy)

44 years later, Germany still plans to close its nuclear plants years before its coal plants, though after the attack on the Nord Stream pipeline, it put off closing the last two nuclear plants… for a few months. In those 44 years, Earth's surface warmed 0.76°C and its land warmed nearly 1.2°C (GISS). Land will continue warming faster than sea, CO2 emissions keep trending upward, and much of that CO2 will stay in the atmosphere even after we stop adding more.

Covid Vaccines

My brother and I tried to convince my 74-year-old father that he should get vaccinated, but my dad's sources told him 5,000 people had already been killed by Covid vaccines, and he trusted his sources far more than either of us. I argued that 5,000 entries in the VAERS database is unsurprising given base rates: natural deaths happen all the time, and some should, by coincidence, happen shortly after vaccination. He dismissed the argument, saying that no one would give a vaccine to someone who was about to die. He ignored my followup email and would go on to ignore many, many more of my responses and questions, though he did later tell me that actually Covid vaccines had killed 100,000 people.

In September 2021, my uncle and former legal guardian fell down in his garden. Passers-by saw this and called an ambulance. At the hospital, he tested positive for Covid and was treated for it. He had chosen to be unvaccinated on the strength of anti-vax arguments. Later, he was put on a ventilator and a brain scan suggested a stroke. Finally, he died. His widow and my father concluded that a stroke killed him—though the widow lamented to me, eyes watering slightly, that perhaps if the doctors had allowed her to give him ivermectin, he would still be alive today. I suggested she fill out a form to get more information about what happened to him in the hospital; she declined.

Soon afterward, my father's favorite anti-vaxxer, Marcus Lamb, died of Covid, and his widow did not suggest any other cause or comorbidity in his death. My father's opinion was unaffected. That Christmas I sent my dad a copy of Scout Mindset, but it was too late; he became more convinced than ever that vaccines were the Real Threat. Just after reading the book, he told me on the phone that the author of Scout Mindset "overthinks things" and immediately sent a 4-page anti-vax Gish gallop he wrote himself. Later, after ignoring all of the comments and dozens of questions I responded with, he said I don't have a "scout mentality". Though in 2021 he insisted that his risk of getting Covid was minimal, he got Covid in 2022, as I expected, and thereby gained, he said, better immunity than any vaccine.

I hope all this illustrates the real-world importance of epistemology, or in other words, truthseeking.

We can't solve all epistemological problems. People will always act kinda dumb. But by broadly chipping away at epistemological problems, by "raising the sanity waterline", I think we can make society function better.

Truthseeking is difficult

It seems to me that most people think figuring out the truth is easy. That's how people talk about politics online: one person talks about how obviously not dangerous Covid is, or how obviously dangerous vaccines are; another talks about how obviously wrong the first person is about that.

I spent some years arguing with people I call "global warming dismissives", who you might know as "skeptics" or "deniers". For instance, have you heard that global warming is caused by cosmic rays? By the sun? By the "pressure law" or "underground rivers of lava", as I've heard? By natural internal variability? By CO2 emissions from volcanoes? By "natural cycles"? Or maybe CO2 emissions from oceans? That CO2 doesn't cause warming? That CO2 causes warming, but CO2 levels naturally change quickly and erratically over the centuries for unknown reasons, as shown by plant stomata? That CO2 causes warming, but only very little? That the lines of evidence showing global warming in the last 100 years are all fraudulent, except for the UAH record which shows less warming? That maybe global warming is happening, and maybe humans are causing it, but it's a good thing and nothing should be done? 

This is a wide variety of perspectives, but people with these beliefs all tend to like the web site WattsUpWithThat. They are united not by what they believe but by what they disbelieve: their denial. (One of their most popular stories, however, is that the word "denier" is meant to associate them with "Holocaust deniers", so anyone who says "denier" is an intolerant bigot. Since they are not skeptics in the scientific sense, I call them "dismissives" instead.)

They're not dumb. Many of them are highly intelligent. And you may think they are fringe, but their beliefs are extremely influential. Despite the 1970s consensus, a 2010 poll (over 30 years later) found that just 34% of Americans thought humans cause global warming.

One way to counter this sort of thing is the SkepticalScience approach: a web site run by volunteers, devoted to refuting myths about global warming. I do think this site has been influential. But most topics that attract "dismissives" or "pseudoscientists" have no equivalent of SkepticalScience. Also, the way SkepticalScience makes its judgements is opaque; readers face a puzzle of how much to trust it, and it is not obvious that SkepticalScience is more trustworthy or impartial than WattsUpWithThat. Plus, current volunteers at SkS often don't have the energy to keep up with the latest science, leaving much of the content several years out of date.

When you branch out from global warming to other questions of fact, such as "whether homeopathy works and is not a placebo effect", "whether 9/11 was an inside job", "whether nuclear power is at least as bad as fossil fuels", "whether all vaccines are bad (or just all Covid vaccines)", "whether humans evolved naturally from an earlier primate", and so on, it should at least be clear that the truth isn't easy for people to discern, and, taking the outside view, you should not expect to be good at it either.

And it's getting harder

In the past, photos were strong evidence of things. Slowly, "photoshopping" became a popular means of falsifying evidence, but that kind of forgery takes a lot of skill to do well. Thanks to AI image generation, that's changing fast. Thanks to deepfakes and video-generation models, even video soon won't be the good evidence it used to be. And thanks to large language models, it is becoming practical to produce BS in immense quantities. Right now, many Russian claims have the virtue of being obviously suspicious; we can't count on that forever.

I would rest easier if we had more ways to counter this sort of thing.

Truthseeking is an unsolved problem

I read and enjoyed Rationality: A-Z, all the most popular essays by Scott Alexander, Scout Mindset, and other scattered materials, and I have to say: for a movement that considers Bayesian reasoning central, it's weird that I've never seen a rationalist essay on the evidential value of a pair of sources, and almost none about how to choose your initial priors, or how to update them in detail.

Consider a pair of sources: your friend Alice tells you X, and then your friend Bob tells you X. Is that two pieces of evidence for X, or just one? This seems like an issue that is both very thorny and very important to the question "how to do Bayesian reasoning", but I've never seen a rationalist essay about it. (I don’t claim that no essay exists; it’s just that being “rationalist” for five years somehow wasn’t long enough to have seen such an essay.)
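To make the thorny part concrete, here is a toy Bayesian sketch (my own invented numbers, not drawn from any essay): if Bob's report is conditionally independent of Alice's, their likelihood ratios multiply; if Bob is merely repeating what Alice told him, his report adds nothing.

```python
def posterior(prior, likelihood_ratios):
    """Combine a prior probability of X with likelihood ratios from
    reports, assuming the reports are conditionally independent given X."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Toy numbers: each friend is 3x more likely to assert X if X is true.
prior = 0.5

# If Alice and Bob observed X independently, both ratios count:
p_two = posterior(prior, [3, 3])  # 0.9

# If Bob is only repeating what Alice said, his report adds nothing:
p_one = posterior(prior, [3])     # 0.75
```

So "two sources" can be worth anywhere between one and two pieces of evidence, depending on how correlated they are; real cases sit somewhere between these extremes, which is exactly the hard part.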

Consider also the question of how to decide how much to trust persons or organizations. This is a central question in the modern era, because the territory is much bigger than in our ancestral environment (the whole planet, over 7 billion people, and endless specialties), so any knowledgeable person must rely mainly on reports from others. A related essay is The Control Group Is Out Of Control, which tells us that even in science (let alone politics), this is a hard problem. Even if you're trying to stick to a high standard of evidence by citing scientific papers, you might end up citing bullshit papers by mistake. In my time exploring the climate deniosphere, I've come across a large number of bad or contra-consensus papers. Even with a 97% consensus on global warming, you should expect this, because there are over 100,000 peer-reviewed papers about climate, a substantial fraction of which give an opinion on global warming. It's not much different in the anti-vax space, where anti-vaxxers have at least one "peer-reviewed journal".

Even in a community working as hard as rationalists do to be rational, we’re still amateurs and hobbyists, so even when key questions have perfectly good answers out there in the broader rationalist community, most of us don't have the time or discipline to discover and learn those answers. You can't expect amateurs to have read and internalized thousands of pages of text on their craft; they have day jobs. So even in rationalism, or in the EA community, I think we have plenty of room to improve.

If you encounter a garbage paper, how do you know it's garbage? Right now, you have to suspect that "something is wrong" with it and its journal. You might, say, comb through it looking for error(s). But that's hard and time-consuming! Who has time for that? No, in today's world we are almost forced to rely on more practical methods such as the following: we notice that the conclusion of the paper is highly implausible, and so we look for reasons to reject it. I want to stress that although this is perfectly normal human behavior, it is exactly like what anti-science people do. You show them a scientific paper in support of the scientific consensus and they respond: "that can't be true, it's bullshit!" They are convinced "something is wrong" with the information, so they reject it. If, however, there were some way to learn about the fatal flaws in a paper just by searching for its title on a web site, people could separate the good from the bad in a principled way, rather than mimicking the epistemically bad behavior of their opponents.

The key to making this possible is sharing the work: each person should not have to decide independently whether a paper is good. We kind of know this already; for instance, we might simply decide to trust one community member's evaluation of some papers. But since that resembles what other communities do (communities that, we know, often reach wrong conclusions), I propose being suspicious of this approach.

The information-misinformation imbalance

Misinformation has a clear business model: Substack may pay over a million dollars annually to the two most popular antivaxxers. And while Steve Kirsch likely earns over $100,000 on Substack subscriptions per year, Gift of Fire's debunking of Steve Kirsch's most important claim has 363 "claps" on Medium (Medium allows up to 50 claps per reader, so we know at least 8 people liked it).

An individual who investigates stuff, but isn't popular, has nowhere they can put their findings and expect others to find them. Sure, you can put up a blog or a Twitter thread, but that hardly means anyone will look at it. Countless times I have been pointed to an outlandish claim treated as obviously true by some science dismissive, Googled for debunkings, and come up empty-handed. There seems to be a point where bullshit is popular enough that lots of people know about it and share it as fact, but either (1) not popular enough for anyone to debunk or (2) debunked by someone who is not popular enough for Google to show.

Individual research is demanding on the individual, but often insufficient

Even if you find a debunking, that's just the beginning. How can you tell whether the original post is correct or the debunk is? Ideally you would read both and decide which is more believable. But this is time-consuming and not even sufficient: What if the original writer has a good debunk of the debunk that you didn't discover? What if the debunking wasn't very good, making the original seem undeservedly strong? How do you know which author is more trustworthy, beyond how plausible their arguments sound? Maybe somebody is misrepresenting their sources. It seems like, to do a good job, you need to investigate the character of each of the authors involved.

Or suppose you start from a question, like "is the linear no-threshold hypothesis a reasonable model of radiation damage?" If you do the research yourself, you might not find a conclusive answer, except perhaps one that confirms your own bias. When I investigated this question, I found that some search queries on Google Scholar mostly gave me results rejecting the hypothesis, while others gave a bunch of results defending it. It took years before I stumbled upon something more important: actual (approximate) numbers about the risks of radiation.

Most people don't actually have enough time to find obscure facts or verify them; doing it well is hard, time-consuming, and not a topic covered by The Sequences: Yudkowsky treated rationalism as an individual endeavor, envisioning "rationalist dojos" where each individual would try to maximize their rationality. That's a fine idea, but it's an inefficient way to raise the sanity waterline. We could reach more accurate beliefs via organized effort than we can by each, individually, browsing the internet for clues as to what the truth might be.

So, how can we pool our resources to find truths more efficiently? 

Computer-aided truthseeking would be easier

We have some computer tools for truthseeking, but they have a lot of limitations:

  • Wikipedia only covers "noteworthy" topics, excludes primary sources and original research, mostly doesn't cover bunk or debunks, and is subject to the biases of its semi-anonymous authors and editors.
  • Google weights pages by a secret formula, one affected largely by PageRank. Search results may not be accurate or trustworthy, and trustworthy results by obscure/unpopular authors may not be found or highly-ranked. Information isn't categorized (e.g. I can't specify "government sources"). Google has a bias toward the present day, and it's impossible to tell what a search query would have returned had it been input years ago.
  • Google Scholar finds only academic papers, which tend to be hard to understand: the general public is not their intended audience, the authors are rarely skilled communicators, and they often prefer "sounding scientific" over plain language. Only sometimes do search results directly address your question. And while papers are more scientific than the average newspaper article... the replication crisis is a thing, and The Control Group Is Out of Control. So one can't simply find a paper and expect it to provide the whole answer, even if it's a meta-analysis.
  • Stack Overflow and Quora offer answers, but answer scores/rankings are weighted by the opinions of readers, which is not a reliable source of truth.
  • Lots of information just isn't publicly available on the internet (e.g. copyrighted books, reports, certain building codes), or is only available in machine-readable form (e.g. CSV files) where it's hard to find or visualize.

What if there were a tool that was more directly useful for finding truth?

I think we should build such a thing. Done properly, a comprehensive tool would be useful not just for rationalists, but for the general public too.

Expected value

Since I'm posting to EA forum, I feel like I should say something about how "valuable" a tool like this could be in dollars or QALYs or something... but I don't know how. How would you measure the value of Wikipedia, or its impact on catastrophic risks? Clearly it's valuable, but I don't know how to even begin to measure it. 

I view the current epistemic environment as a risk similar in magnitude to climate change. Higher global temperatures create a myriad of risks and costs, none of which can directly cause a catastrophe by itself. Instead, higher temperatures raise the general risk of conflict, strife, and poverty in the world by making life harder or less pleasant for a variety of people. I have funded clean-energy efforts not because climate change will directly cause human extinction, but because the rising "risk waterline" is a general threat to the health of society and to the natural environment.

The current epistemic environment is risky in the same way. Some people storm the capitol building because the election was "stolen". Others fight against the perceived threat of nuclear energy, or even wind turbines. Another large group risks their lives over vaccines. Still others fight against utilitarianism. Some are "red pilled", others "black pilled". People seem driven by tribalism, memes and underfunded journalism. Society offers limited means for the average person to improve upon this.

But compared to climate change, epistemological interventions are highly neglected.

My idea is just one of many projects that could help raise the sanity waterline, but if successful, I think its value would be comparable to Wikipedia's. It would cover domains that Wikipedia doesn't cover in depth, and reach people that Wikipedia doesn't reach for culture-war reasons. The value of Wikipedia is hard to measure, but intuitively large. My idea is the same way.

My idea

I think a site should be built for crowdsourced evidence storage, mirroring, transformation, visualization, aggregation, analysis, Bayesian evaluation, and search.

This would be a hard project. It would require a bunch of math (Bayesian reasoning in software), game theory (to thwart malicious users and bad epistemics), UX design (so it's pleasant to use), "big data" engineering (the database would become immense if the project is successful), and a very large amount of code. I am definitely not qualified to design it all myself, but I'll put down some ideas to give you a flavor of what I have in mind.

Let's call it the "evidence dump".

To my knowledge, it would be one of the first attempts at applied "computational epistemology". Wikipedia's page on computational epistemology lists publications going back several decades, yet I'd never heard of the field (I just Googled it to see what I would get). Google gives zero results for "applied computational epistemology", which is what I'm going for: a site that uses epistemological theory to aggregate millions of judgements.

I hope that this site will be informed by rationalist thinking and epistemological theory, and that it will in turn help inform rationalists and AI researchers about epistemological processes relevant to them. For example, emergent failure modes of the site’s algorithms should teach us something. (It is not clear to me whether improving AGI epistemology would be good or bad with respect to x-risk, but otherwise, improving AI epistemology seems like a good thing.)

I don’t expect the system to change the mind of extremists or people like my father. But my hope is that

  • People who do care about truth will appreciate that this is a tool that helps them (or at least, will help them after it becomes popular; how to go from zero to popular is a total mystery)
  • Some people will appreciate that it gives them a voice to share their knowledge and experiences, even when there isn’t a big userbase
  • It will gently guide various people away from extreme political thinking
  • The rest of society will somehow be affected by truthseekers getting better information, e.g. if Metaculus forecasters collectively record, aggregate and view all their observations on the system, maybe the result would be more accurate forecasts, due to each individual forecaster seeing more relevant, better-organized information, and those better forecasts in turn would benefit society. Granted, Metaculus already has an ordinary comment system and it’s unclear how to convince people to record more evidence here than they already record there.

Evidence storage

The evidence dump is like Twitter for evidence, summaries and claims.

You can post evidence in two basic forms: either as an eyewitness account (like that story about my father), or as documentation of evidence/claims posted elsewhere.

Direct evidence entry example:

  • Category: crime [posted by Username on March 6, 2022 at 6PM]
    • On March 4, 2022 at ~11 PM, I saw a man break the passenger-side window of a Ford Taurus on Frankfurt St near 12 Avenue in <City> [software autolinks to a map]
    • He rummaged inside, put something in his pant pocket, and ran off
  • Category: Common knowledge among: software developers
    • Dynamically-typed languages typically run more slowly than statically-typed languages.

They say anecdotes are not evidence, but if a million people could be persuaded to record thousands of anecdotes, I think some real signals would be found in the noise. If, in turn, those signals are evaluated within a well-designed system, perhaps correct analyses will win more views and popularity than incorrect ones.

"Common knowledge" could be a potentially useful sub-database of the site, if the site becomes popular. One way this could work is that new users are invited to tell us their fields of expertise (with some limits on how many fields one can be an expert in), rate the truth of statements from other people in the same field, and then to "Name a fact that almost everyone in your field knows, which isn't known or consensus among the general public."

Another sub-database that the site could have is a sort of "yellow pages" of products, databases and tools, e.g. "This URL has a database of climate science data", "This URL has a tool for doing task X".

Claim/summary example:

Claims from Wikipedia should be generated automatically or semi-automatically.

  • Academic paper: Doran 2009 
  • Topic: Climate change | consensus
    • Summary: 75 of 77 respondents to the Doran 2009 survey of mostly North American scientists “who listed climate science as their area of expertise and who also have published more than 50% of their recent peer-reviewed papers on the subject of climate change” agreed that “human activity is a significant contributing factor in changing mean global temperatures.” 
    • However, the total number in the category was 79. A reasonable interpretation of this data is that when asked “do you think that mean global temperatures have generally risen,” two said no and then didn’t answer the question about whether “human activity is a significant contributing factor” and therefore weren’t counted. This interpretation of the data suggests a 95% consensus rather than 97%.

Ideally, the site could gather information about scientific papers automatically:

  • Journal: Eos, Vol. 90, No. 3, 20 January 2009
  • Not retracted.
  • 923 citations

It will often be tempting for users to provide not just a summary but also some analysis, as seen here, where N=79 but somehow "75 of 77" agreed with the consensus statement. The analysis part ("A reasonable interpretation is...") should probably be split off from the summary proper, and the site must make such splitting easy to do.
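For what it's worth, the two consensus percentages in the example above follow from simple arithmetic over the survey's two possible denominators:

```python
# Doran 2009 expert subsample: 79 scientists in the category,
# 77 of whom answered the attribution question, 75 agreeing.
agree, answered, in_category = 75, 77, 79

consensus_reported = 100 * agree / answered        # the widely cited figure
consensus_alternative = 100 * agree / in_category  # counting all 79 experts

print(round(consensus_reported, 1))     # 97.4
print(round(consensus_alternative, 1))  # 94.9
```

The choice of denominator, not the raw counts, is what the "interpretation" dispute is about, which is exactly why summaries and analyses need to be separable.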

It's important that claims and summaries be brief, because they are re-used in aggregation and analyses. If a summary needs to be long, it can be broken into parts that can be referenced and rated individually.

Different people would be able to submit competing or complementary summaries of the same source document, with some kind of voting to find the best summaries, and some way to flag errors in summaries and downrank them on the basis of accuracy.
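One standard ranking technique that fits here is the lower bound of the Wilson score interval, which keeps a summary with two early upvotes from outranking a well-tested one. A minimal sketch (the real site would doubtless need something richer than raw up/down votes):

```python
import math

def wilson_lower_bound(upvotes, total, z=1.96):
    """Lower bound of the ~95% Wilson score interval for the fraction
    of raters who consider a summary accurate. Penalizes small samples."""
    if total == 0:
        return 0.0
    phat = upvotes / total
    denom = 1 + z * z / total
    center = phat + z * z / (2 * total)
    spread = z * math.sqrt((phat * (1 - phat) + z * z / (4 * total)) / total)
    return (center - spread) / denom

# 8-of-10 positive ratings ranks below 70-of-100, because the smaller
# sample gives less confidence in the true accuracy:
print(wilson_lower_bound(8, 10))    # ~0.49
print(wilson_lower_bound(70, 100))  # ~0.60
```

The appeal of this kind of rule is that new summaries aren't buried forever (their bound rises quickly with a few votes) but can't leapfrog established ones on a handful of friendly ratings.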

Summarizing claims by cranks and conspiracy theorists is good and encouraged; at this level the site is merely cataloging, not judging.

Claims based on books should provide an excerpt from the book that supports the claim, perhaps as a photo.

Mirroring and transformation

I want a database of databases (or database of datasets). Let's consider Covid-19 pandemic data: each country (or province) produces its own bespoke database which can typically be viewed on a bespoke country-specific web site (which provides some visualizations and not others), and is typically also offered in a bespoke electronic form.

At a minimum, one could publish links to all the various web sites and machine-readable datasets about a topic on the evidence dump.

Often there is someone who volunteers to gather and transform data from many places with a bespoke program published on GitHub. If so, someone should be able to import the output of that program into the evidence dump (and other people should be able to review the output and claim that it accurately reflects particular sources or not: "data about X matches source Y").

Ideally, the evidence dump itself would support some kind of data import process that involves transformations on the source data to produce a common schema.
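As a toy sketch of what such an import could look like (the column names, the common schema, and the `transform` helper are all invented for illustration):

```python
import csv, io

# Hypothetical common schema: every record becomes (date, region, metric, value).
def transform(raw_csv, mapping, metric):
    """Map a bespoke per-country CSV onto the common schema.
    `mapping` names which source columns hold the date, region, and value."""
    rows = []
    for rec in csv.DictReader(io.StringIO(raw_csv)):
        rows.append({
            "date": rec[mapping["date"]],
            "region": rec[mapping["region"]],
            "metric": metric,
            "value": float(rec[mapping["value"]]),
        })
    return rows

# A made-up German-style source file with its own column names:
raw = "Datum,Bundesland,Faelle\n2022-03-01,Bayern,1234\n"
common = transform(raw, {"date": "Datum", "region": "Bundesland",
                         "value": "Faelle"}, "covid_cases")
print(common)
```

Reviewers could then check a claim like "this imported data matches source Y" against the mapping itself, which is far easier to audit than a bespoke one-off script.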

Aggregation

Sources can be grouped. Ideally we would find ways to do this semi-automatically. Example:

  • Studies addressing the question "<question>?"
    • Study 1
      • Claim/conclusion
    • Study 2
      • Claim/conclusion
    • Study 3
      • Claim/conclusion
    • Study 4
      • Claim/conclusion

More broadly, lists of things in categories are useful. People should be able to publish lists about any topic, and other people should be able to vote about certain aspects of the items on the lists. If it's a list of tools for task X, for example, people could vote on how good tool T is for task X.

Analysis

To publish an analysis, users would take a bunch of claims, and write some logic involving those claims to produce a conclusion.

In the first draft of this proposal, I suggested a very precise writing style similar to legalese:

  • Humans have been adding CO2 to the atmosphere, mostly by burning fossil fuels
    • Reference(s) on this topic
  • Natural carbon sources and sinks, such as oceans and plants, are net absorbers of CO2 from the atmosphere every year
    • Reference(s) on this topic
  • There are no major CO2 emitters except humans (including man-made machines) and natural carbon sinks
  • Therefore, humans are responsible for the increase of CO2 in the atmosphere in the modern era (20th/21st centuries)
    • This reasoning relies on the law of conservation of mass and the fact that CO2 is not created or destroyed in the atmosphere itself.

Here it says "net absorbers of CO2 from the atmosphere" rather than just "net absorbers of CO2"; there is an exhaustivity clause (third point); it has the phrase "including man-made machines" to clarify that we're not just talking about what comes directly from human bodies; and the law of conservation of mass is mentioned.

I was thinking that politically sensitive topics need careful, qualified wording that is correct and complete, so that the conclusion cannot reasonably be contested; the voting system would then help such careful treatments rise to the top.

But this seems unacceptable: people don't want to read or write stuff like this. Clear but easy-to-read statements should rise to the top, or the site can't gain popularity. However, caveats and clarifications are important; they need to be included and preserved. I am unsure how to achieve the right balance.

The result of an analysis can be used to support other analyses, forming a DAG (directed acyclic graph) of analyses, i.e. a network of proposed conclusions.
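As a toy illustration of such a DAG, here is one possible propagation rule, "a conclusion is no stronger than its weakest premise"; the class and the rule are invented for illustration, and a real system would need cycle detection, caching, and a better-founded scoring model:

```python
from dataclasses import dataclass, field

@dataclass
class Analysis:
    name: str
    own_score: float   # crowd rating of this analysis's own reasoning
    depends_on: list = field(default_factory=list)

    def effective_score(self):
        # Toy propagation rule: a conclusion is no stronger than its own
        # reasoning or its weakest premise.
        if not self.depends_on:
            return self.own_score
        weakest = min(a.effective_score() for a in self.depends_on)
        return min(self.own_score, weakest)

co2_rising = Analysis("CO2 is rising", 0.98)
human_cause = Analysis("Humans caused the rise", 0.80, [co2_rising])
warming = Analysis("The CO2 rise causes warming", 0.95, [human_cause])
print(warming.effective_score())  # 0.8, limited by the weakest premise
```

The point of the DAG structure is exactly this: when the community downgrades one premise, every conclusion built on it is automatically downgraded too.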

But before we move on, notice that the third clause is a negative statement ("There are no..."). They say you can't prove a negative, but you kinda can: as long as no one can find credible evidence against it, negative statements should generally stand.

I don't quite know how, but:

  • An analysis should be somewhat resistant to cherry picking. For example, an author could cherry pick two papers that support a desired conclusion, but other users should be able to add additional scientific papers that throw a wrench into the analysis.
  • Users should be able to suggest constructive modifications to an analysis to fix flaws or omissions; other users and the author could then vote on them.
  • There must be ways to resolve conflicts in the underlying data, and I'm not sure how. For instance, let's say we aggregate all the papers on ivermectin efficacy against Covid. But, some papers said it's effective, others said it's not effective, and others said it may be effective but statistical significance wasn't reached. The simplest possible technique would be to count the number of papers saying "effective" vs "ineffective"/"harmful", and since more papers suggest "effective" than "ineffective"/"harmful", "effective" wins. But this is insufficient:
    • the papers vary wildly in quality, sample size, and statistical methods
    • a couple were fraudulent/retracted
    • there are more possible interpretations than just "effective" and "ineffective", e.g. WORMS!
  • It seems to me that the site needs to have a concept of paradigms of interpretation (hypotheses or theories) and bring them together to compete against each other. For ivermectin, one paradigm would be "Ivermectin is effective against Covid" and another would be "Ivermectin is effective against worms, which incidentally helps people with Covid if they also have worms. However, ivermectin is not very effective against Covid itself". A third would be "it's complicated. While many papers suggest it is effective, the body of evidence about ivermectin includes some fraudulent papers and papers with poor methodology, papers whose results may have been affected by Strongyloides infections, and was on the whole affected by publication bias”. Then, users can help rate how well the available evidence fits each paradigm.
    • Statements of uncertainty, like "It's unclear if ivermectin works", are inappropriate paradigms, but you could say "The evidence that ivermectin works is similar in strength to the evidence that it doesn't work". Or, we might see similar scores (assigned by crowd review) to "Ivermectin is effective" and "Ivermectin is ineffective" paradigms, which also implies that it's unclear if ivermectin works.
    • Or, let's consider global warming, where paradigms could include "it's caused by the sun", "it's caused by natural internal variability", "it's caused by greenhouse gases emitted by humans", and "the apparent temperature changes were faked by a global conspiracy". Any user can evaluate any piece of evidence against any paradigm, hopefully allowing the correct paradigm to rise to the top.
  • Creating an analysis should not be much more onerous than it needs to be; a sufficiently friendly user interface is necessary and may be tricky to achieve.
    • A user should be able to convert an essay they have written into an analysis. For this purpose I'm guessing that users will need to be able to edit multiple levels of analysis at once, and/or have editable claims or 3rd-party evidence nested inside an analysis.
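As a concrete sketch of the paradigm idea above: suppose each user vote records how well one piece of evidence fits one paradigm, on a scale from -1 (contradicts) to +1 (supports), and the site averages them per paradigm. The data model, names, and numbers below are all invented for illustration; a real design would also need to weight votes by user reputation and evidence quality.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical votes: (paradigm, evidence_id, fit_score),
# where fit_score ranges from -1 (contradicts) to +1 (supports).
votes = [
    ("ivermectin effective vs Covid", "small-positive-trial", +0.9),
    ("ivermectin effective vs Covid", "large-RCT-null",       -0.8),
    ("effective vs worms, not Covid", "small-positive-trial", -0.2),
    ("effective vs worms, not Covid", "large-RCT-null",       +0.7),
    ("effective vs worms, not Covid", "strongyloides-map",    +0.8),
]

def paradigm_scores(votes):
    """Average the crowd's fit ratings per paradigm (naive aggregation)."""
    by_paradigm = defaultdict(list)
    for paradigm, _evidence, fit in votes:
        by_paradigm[paradigm].append(fit)
    return {p: mean(fits) for p, fits in by_paradigm.items()}

# Paradigms with higher average fit rise to the top.
for paradigm, score in sorted(paradigm_scores(votes).items(),
                              key=lambda kv: -kv[1]):
    print(f"{score:+.2f}  {paradigm}")
```

With these invented numbers, the "worms" paradigm outscores the "effective against Covid" paradigm because more of the evidence fits it — which is exactly the kind of ranking the crowd review is supposed to produce.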

One more thing is that it seems like an analysis should be able to support or cast doubt on a source: you should be able to argue "source X produces lots of bunk because <reasons>", and if the community agrees, "source X" gets a lower score/reputation, and that lower score would then affect the rating of other analyses that rely on that same source.
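One deliberately naive sketch of that propagation, with invented names and an invented discounting rule (a real system would need something far more careful than "take the weakest source"):

```python
# Hypothetical reputation scores in [0, 1], set by community review.
source_reputation = {"source X": 0.2, "Reuters": 0.9}

def discounted_rating(base_rating, cited_sources, reputations):
    """Scale an analysis's community rating by the reputation of the
    weakest source it relies on; unknown sources default to 0.5."""
    if not cited_sources:
        return base_rating
    weakest = min(reputations.get(s, 0.5) for s in cited_sources)
    return base_rating * weakest

# An otherwise well-rated analysis that leans on "source X" gets pulled down.
print(round(discounted_rating(0.8, ["source X", "Reuters"], source_reputation), 2))
```

The point of the sketch is only the dependency structure: when the community lowers "source X", every analysis citing it is automatically re-scored.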

Shopping

Wait, shopping? Does this really belong here?

A few months ago I wanted to find a 12V microwave, stove, or slow cooker with input power between 200W and 700W (with 110V AC as a nice-to-have) and capacity over 2L.

These are impossible products to find. When you search for this sort of product, you find food warmers under 100 watts, microwaves over 1000 watts, propane stoves, space heaters, food thermoses, and contraptions I don't even recognize. After looking at a few hundred product links and a couple dozen product pages I found... not one single item matching this description. Sites such as Google Shopping, Amazon, and AliExpress do not seem to support even the most basic constraints on search results, such as "these word(s) must be in the product title" or even "these word(s) must be somewhere in the product description". I'm baffled and frustrated that online shopping could be this bad decades after the concept was invented.

Seems to me that if a shopping site could actually find what you're looking for, that could be worth money.

There are some keys to this sort of thing working well.

  1. We need products in a database with a consistent schema. This requires some code; if a product has an input voltage of "12V DC" and an input current of "10A", for example, an "input wattage" of 120W can be inferred (the fact that it's inferred rather than measured being an additional piece of information that ought to be stored). But could we build a site with a means to standardize schemas like this? Could users and/or companies be convinced to add their household products to the database? Could we resolve disputes via voting? For example, if the specs to a dosimeter say it handles 1000µSv/hour but a user finds that it severely undermeasures above 300µSv/hour, this could override the manufacturer claim to become the default value in the database if the evidence is good enough (user has high reputation or other users in good standing corroborate the claim). And perhaps the site can compute a probability distribution on the max dose rate field, and feed that into the search ranking algorithm. And perhaps users could propose new fields representing different thresholds like "max radiation measured within 2x of reality" and "maximum radiation reading ever witnessed", figures that could be crowdsourced by hobbyists.
  2. The plain-text search system should connect with the schema. If I search for "12V microwave, stove, or slow cooker, with input power between 200W and 700W and capacity over 2L", I imagine the site could guess some search constraints and propose them to me.
  3. As always in internet search, ranking order is important and needs some thought.
  4. There should be product comparison matrices, where you select some products and then can create a data table comparing their specs. With charts?
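To illustrate the inference step in point 1, here is a minimal sketch assuming a simple dict-based product record; the field names and parsing rule are invented for illustration:

```python
import re

def parse_number(text):
    """Pull the leading number out of a spec string like '12V DC' or '10A'."""
    m = re.match(r"\s*([0-9]+(?:\.[0-9]+)?)", text)
    return float(m.group(1)) if m else None

def infer_wattage(product):
    """If wattage is missing but voltage and current are present, derive it.
    The inferred value is flagged, so the database can distinguish it from
    measured or manufacturer-stated figures."""
    if "input_wattage" in product:
        return product
    volts = parse_number(product.get("input_voltage", ""))
    amps = parse_number(product.get("input_current", ""))
    if volts is not None and amps is not None:
        product["input_wattage"] = volts * amps  # P = V * I
        product["input_wattage_inferred"] = True
    return product

print(infer_wattage({"input_voltage": "12V DC", "input_current": "10A"}))
```

A record of `"12V DC"` and `"10A"` comes back with an inferred 120 W wattage, plus the flag marking it as derived rather than stated.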

I bring up "shopping" for a couple of reasons. First, areas like this could potentially have some kind of business model, so that the site doesn't have to be funded entirely by donations.

Second, the same features are useful for any kind of research. It makes just as much sense to make a consistent schema, an excellent search system and a comparison table for a list of mines, for example:

Search: [tag 'mine'; 'Common Name' contains 'Quarry'                ]
Common Name   | Country | Type                               | Annual Production
XYZ Quarry    | Germany | fossil fuel > coal > lignite (etc) | 134,000,000 tons (2021)
ABC Quarry    | DRC     | precious > diamond                 | 1327 kg (2018)
...

Among the many challenges in making a system like this: crowdsourcing everything, making everything up for debate, and yet still producing a schema consistent enough to be useful, with "best-guess" data accurate enough to be useful. Each individual cell in this table could potentially have dozens of pages of debate and hundreds of votes affecting it, with just the "best guesses" shown when a user views the table. And of course, there may be dozens or hundreds of columns that are not shown.
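A toy version of the structured search in the mock-up above might look like the following; the records and field names are invented, and a real system would query a database rather than filter in memory:

```python
# Hypothetical records mirroring the mock table; only "best guesses" shown.
mines = [
    {"common_name": "XYZ Quarry", "country": "Germany",
     "type": ["fossil fuel", "coal", "lignite"], "tags": {"mine"}},
    {"common_name": "ABC Quarry", "country": "DRC",
     "type": ["precious", "diamond"], "tags": {"mine"}},
    {"common_name": "Old Pit", "country": "UK",
     "type": ["metal", "tin"], "tags": {"mine"}},
]

def search(records, tag=None, name_contains=None):
    """Apply the structured constraints from the example search box."""
    out = []
    for r in records:
        if tag and tag not in r["tags"]:
            continue
        if name_contains and name_contains.lower() not in r["common_name"].lower():
            continue
        out.append(r)
    return out

for r in search(mines, tag="mine", name_contains="Quarry"):
    print(r["common_name"])
```

The same filter-over-schema mechanism works identically for mines, microwaves, or dosimeters, which is the point of having a consistent schema in the first place.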

Bayesian evaluation

Here's where I'm stumped: we need transparent algorithms to grade the analyses and claims under various paradigms, in ways that point us toward the truth. I'm not sure how.

The voting systems on the site, whatever they are, should discourage partisan thinking ("like/dislike", "agree/disagree") in favor of analytical thinking ("does the conclusion follow from the premises?", "does this claim support/refute this statement?").

Nevertheless, I would assume that in the long run people will try to game the system via lying and motivated reasoning—votes that are lies, summaries that misrepresent the source material accidentally or deliberately, fabricated sources, bad analysis, and spam. For this reason, a user reputation system (like Stack Overflow's) is also needed.

It seems like there should be some way to evaluate third-party claims, but it's not clear how to do that in the plan laid out above. For example, ideally the software could detect, not just declare by fiat, that Fox News or MSNBC editorials are less reliable than Wikipedia, PBS or Reuters.

Finally, of course, I want some kind of automatic Bayesian reasoning to compute estimates of how likely various ideas are to be true, at least in some cases. How to do this... let's call it an exercise for the reader.
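For what it's worth, here is one toy direction (all numbers invented): treat competing paradigms as hypotheses with priors, let the crowd estimate likelihoods P(evidence | paradigm) for each piece of evidence, and fold them in with a naive Bayes update. Real evidence is not independent, so this is a sketch of the shape of the computation, not a workable design.

```python
# Priors over two competing paradigms (hypotheses).
priors = {"effective": 0.5, "ineffective": 0.5}

# Each row: crowd-estimated P(observing this study result | paradigm).
evidence = [
    {"effective": 0.7, "ineffective": 0.4},  # a positive small trial
    {"effective": 0.2, "ineffective": 0.6},  # a null large RCT
]

def posterior(priors, evidence):
    """Multiply in each likelihood, renormalizing after each update."""
    post = dict(priors)
    for likelihoods in evidence:
        for h in post:
            post[h] *= likelihoods[h]
        total = sum(post.values())
        post = {h: p / total for h, p in post.items()}
    return post

print(posterior(priors, evidence))
```

With these made-up likelihoods, the null large RCT outweighs the positive small trial and "ineffective" ends up more probable — the kind of transparent, inspectable arithmetic I'd want the site to expose.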

Information will be linked together, ranked and structured in ways that facilitate searching. For example, high-rated paradigms should be listed high on search results. 

Also, pieces of evidence will have various data fields attached, such as dates, locations, categories and tags, that should be searchable.

Charts

I'm constantly looking for charts. I like to ask Google Images because sometimes it finds the answer quickly. But Google Images doesn't seem to think charts/graphs are important; there are search categories for "Clip art", "Line drawing" and "GIF" but not "Chart". Often it finds something on Statista, which then asks for money to actually see the chart. Often the data I want to see is freely available, but the chart is not. Finding data in non-chart form is hard, though; often I just give up.

Ideally, the evidence dump would have its own tools to easily visualize data. I'm inspired somewhat by Power BI. I find parts of Power BI painful, but once you have data in the right form, you can create a pretty wide variety of useful charts and tables very quickly and easily. Plus, you can combine multiple charts and tables and control widgets in a "report".

I imagine users creating visualizations via some drag-and-drop interface, and then publishing them with an informative title, so other people can find them via site search or via Google image search. The result: a volunteer-run interactive Statista without the paywall.

Theory of growth?

It would be unwise to build a site run by a volunteer community without a plausible theory for how the community will materialize. This will be a system with network effects, where a very big site is about quadratically more valuable than a small one.

I haven't really worked this part out, but I imagine a key aspect is that early versions of the site need to seem somehow more useful, valuable or fun than an ordinary blog for the people writing stuff in it. I guess it should initially be tailored to only one piece of its mission — one that doesn't depend a lot on network effects.

This won't happen

I'm a senior software developer (CTO, technically) with about 20 years' experience. My job is to build high-quality software that doesn't benefit the world in any way. I'd rather make different software, but for that I need funding. As it is, I was too busy for the last six months to even publish this article. Assuming charity funding isn't available, I'm looking for ideas on

  • for-profit business models to build any part of this idea that could eventually become part of a non-profit evidence dump.
  • people who could act as my cofounder in a startup in a market related to this
  • specifics of the user interface, data model or epistemological methods (if you have expertise in computational epistemology, please help!)
  • ways this idea could interact/intersect with related projects such as Metaculus and LessWrong
Comments

Your examples seem suspiciously political, not a good sign. They also use connotation-heavy language, another red flag for anyone interested in "truth seeking". So I figured I'd ask: how is your personal truth-seeking going? What important updates in your worldview come to mind as a result of it? Or is it only the other people who are not good at collecting and organizing evidence?

Most important matters have a large political component. If it's not political, it's probably either not important or highly neglected (and as soon as it's not neglected, it probably gets politicized). Moreover, if I would classify a document as reliable in a non-political context, that same document, written by the same people, suddenly becomes harder to evaluate if it was produced in a politicized context. For instance, consider this presentation by a virologist. Ordinarily I would consider a video to be quite reliable if it's an expert making a seemingly strong case to other experts, but it was produced in a politicized environment and that makes it harder to be sure I can trust it. Maybe, say, the presenter is annoyed about non-experts flooding in to criticize him or his field, so he's feeling more defensive and wants to prove them wrong. (On the other hand, increased scrutiny can improve the quality of scientific work. It's hard to be sure. Also, the video had about 250 views when I saw it and 576 views a year later—it was meant for an expert audience and never went anywhere close to viral, so he may be less guarded in this context than when he is talking to a journalist or something.)

My goal here is not to solve the problem of "making science work better" or "keeping trivia databases honest". I want to make the truth easier to find in a political environment that has immense groups of people who are arriving at false or true beliefs via questionable reasoning and cherry-picked evidence, and where expertise is censored by glut. This tends to be the kind of environment where the importance and difficulty (for non-experts) of getting the right answer both go up at once. Where once a Google search would have taken you to some obscure blogs and papers by experts discussing the evidence evenhandedly (albeit in frustratingly obscurantist language), politicization causes the same search to give you page after page of mainstream media and bland explanations which gravitate to some narrative or other and which rarely provide strong clues of reliability.

I would describe my personal truthseeking as frustrating. It's hard to tell what's true on a variety of important matters, and even the ones that seemed easy often aren't so easy when you dive into it. Examples:

  • I mentioned before my frustration trying to learn about radiation risks.
  • I've followed the Ukraine invasion closely since it started. It's been extremely hard to find good information, to the point where I use quantity as a substitute for quality because I don't know a better way. This is wastefully time-consuming and if I ever manage to reach a firm conclusion about a subtopic of the war, I have nowhere to publish my findings that any significant number of people would read (I often publish very short summaries or links to what I think is good information on Twitter, knowing that publishing in more detail would be pointless given my lack of audience; I also sometimes comment on Metaculus about war-related topics, but only when my judgement pertains specifically to a forecast that Metaculus happens to ask about.) The general problem I have in this area is a combination of (1) almost nobody citing their sources, (2) the sources themselves often being remarkably barren, e.g. the world-famous Oryx loss data [1, 2] gives nowhere near enough information to tell whether an asserted Russian loss is actually a Russian rather than Ukrainian loss, (3) Russia and Ukraine both have strong information operations that create constant noise, (4) I find pro-Putin sources annoying because of their bloodthirstiness, ultranationalism and authoritarianism, so while some of them give good evidence, I am less likely to discover them, follow them and see that evidence.
  • It appears there's a "97% consensus on global warming", but when you delve deep into it, it's not as clear-cut. Sorry to toot my own horn, but I haven't seen any analysis of the consensus numbers as detailed and evenhanded as the one I wrote at that link (though I have a bias toward the consensus position). That's probably not because no one else has done such an analysis, but because an analysis like that (written by a rando and not quite affirming either of the popular narratives) tends not to surface in Google searches. Plus, my analysis is not updated as new evidence comes in, because I'm no longer following the topic.
  • I saw a rather persuasive full-length YouTube 'documentary' with holocaust-skepticism. I looked for counterarguments, but those were relatively hard to find among the many pages saying something like "they only believe that because they are hateful and antisemitic" (the video didn't display any hint of hate or antisemitism that I could see). When I did find the counterarguments, they were interlaced with strong ad hominem attacks against the people making the arguments, which struck me as unnecessarily inflammatory rather than persuasive.
  • I was LDS for 27 years before discovering that my religion was false, despite always being open to that possibility. For starters, I didn't realize the extent to which I lived in a bubble or to which I and (especially) other members had poor epistemology. But even outside the bubble it just wasn't very likely that I would stumble upon someone who would point me to the evidence that it was false.

is it only the other people who are not good at collecting and organizing evidence?

No, I don't think I'm especially good at it, and I often wonder if certain other smart people have a better system. I wish I had better tooling and I want this tool for myself as much as anyone else.

Not a good sign

In what way? Are you suggesting that if I built this web site, it would not in fact use algorithms designed in good faith with epistemological principles meant to elevate ideas that are more likely to be true but, rather, it would look for terms like "global warming" and somehow tip the scales toward "humans cause it"?

connotation-heavy language

Please be specific.

A lot of the resources invested in "fighting misinformation" go toward censoring nonestablishment voices, and that often includes putting out misinformation like "Hunter's laptop was a Russian misinformation campaign" to facilitate political censorship.

In that environment, someone proposing a new truthseeking project might be interested in creating a project to strengthen the ruling narrative, or they might be interested in actual truthseeking that affirms the ruling narrative when it's right and challenges it when it's wrong.

In a world with so much political pressure, it probably takes strong conviction to run a project that does actual truthseeking instead of letting it be co-opted for narrative control.

I find him using political examples not suspicious at all. After all, politics is an area where epistemic mistakes can have large to extremely large negative effects. He could have referred to non-political examples, but those tend to be comparatively inconsequential.

Yes, indeed, and that was my point: they are using a political example with connotation-loaded language as if it were the truth, not one possible perspective. Which made me question the OP's ability to evaluate their own commitment to truth-seeking.

For any proposition you assert, it is possible that someone else has another "perspective" and asserts it instead, each of you acting as if it were the truth. So the existence of possible perspectives is not specific to politics or truthseeking. Sure, it is possible to be overconfident relative to the evidence you have, but I don't recommend universal extensive hedging for political examples merely because they are political. If you disagree with his examples, you are surely able to insert similar examples where (what you believe to be) epistemic mistakes have a very large negative impact. The thing with contemporary political mistakes is: they are nearly always controversial, so disagreement is expected, but this is not substantial evidence that political mistakes with large negative effects don't exist. (One could use now-uncontroversial historical examples instead, like Lysenkoism, but this could make it sound like such mistakes are a thing of the past, that we are much wiser now.)

I agree that something like an evidence clearinghouse seems like a good and important project. However I am not sure you have gotten the most important part of the problem.

Your proposal seems to focus on having the clearinghouse:

  • Organize information that others have created
  • Organize arguments that others have created

However I think this doesn't really get to the part of the problem that can be effectively addressed.

I think a disagreement often has two root causes:

  • The two sides don't trust each other. For instance, lots of scientists are incompetent or politically biased, and especially the scientists who make public statements have typically been selected by politically biased organizations, so you generally should be skeptical of scientists.
  • There is some area that the two sides both want to control. For instance pro-vaccine and anti-vaccine people both want to influence the bodies of anti-vaccine people (pro-vaxxers want anti-vaxxers to get vaccinated and anti-vaxxers don't want to get vaccinated).

I think the primary tasks of an evidence clearinghouse would be something like:

  • Go out of its way to figure out what the underlying conflicts are.
  • Collect new evidence relevant for the conflicts (e.g. is some specific scientific field an exception that is especially trustworthy?)
  • Interpret evidence in the light of people's positions in the conflicts (e.g. what are the potential pros or cons to getting vaccinated in the light of the conflict?)

One coauthor of the recent editorial, "The False Promise of ChatGPT", Jeffrey Watumull, champions an alternative style of AI, "anthronoetic AI", in which the capacity to provide explanations, and not just correct predictions, is fundamental. There is very little information about it online, but you can see a glimpse of the architecture in this video. You might want to talk to him about epistemological methods. 

My job is to build high-quality software that doesn't benefit the world in any way. I'd rather make different software, but for that I need funding.

Ah, story of my life. There are things that pay my bills. There are things that I think the world would benefit from if I made them. I can't find any intersection between these two.

Maybe Kickstarter, if you have a specific idea?

I think another issue that would arise is that if you get "into the weeds," some topics are a lot more straightforward than others (probably delineated by being rooted in mostly social facts or mostly natural science facts, which all behave completely differently).

The Ukraine issue is a pretty bad one, given the history of the region, the Maidan protests, US history of proxy wars, and, and, and. It seems to me far from clear what the simple facts are (other than you have two factions of superpowers, fighting for different things). I have an opinion as to what would be best, and what would be best for people of Ukraine, and what I think sections of Ukraine undisturbed by US and Russian meddling for the past 30 years might vote in referenda. And at least one of those thoughts disagrees with the others. Add to this the last 70 years of US interventions (see Chomsky for pretty good, uncontroversial fact-based arguments that it has all been pretty evil, and by the standards of the Nuremberg Trials one might execute every president since Kennedy).

On the other hand, Global Warming is pretty straightforward (even allowing for seeming complications like Mars temperature rise, or other objections). We can handle the objections in measurable terms of physical reality for a home-run clear answer.

One of OP's examples is an entirely social reality and the other is a matter of physics. Let's face it, in some sense this war is about where we draw squiggly lines and different colored blobs on a map. It's levels removed from something where we can do measurable tests. If you really made all the truth easy to find, bringing someone straight into the weeds of a social problem like a US/NATO intervention, in many cases the answer will not come out clear, no matter how good your tool is. In fact, a reasonable person after reading enough of the Truth might walk away fully disillusioned about all actors involved and ready to join some kind of anarchist movement. Better in some cases to gloss over social realities in broad strokes, burying as much detail as possible, especially if you think the war (whichever one it is!) is just/unjust/worth the money/not worth the money, etc.

I think Western colonialism was really bad, US wars were really bad, the Nazis were really bad, and so on. But from what I see of Russia's position, these are excuses. The true reason for the current war is annexation.

Russia could try to get Ukraine away from NATO, remove ultranationalists, protect Russian speakers and whatever else - purely as a military operation, without annexation. Instead, two days after the Maidan in 2014 and before any hostile action from the new Ukrainian government, Russia initiated annexing Crimea. That move was very popular with the Russian population, it wasn't Putin alone. Similarly in the current war, the stated goals were "demilitarization and denazification", but then Russia annexed several captured territories, which wasn't needed for any of those goals.

In fact I don't know any good reason for these annexations at all. They don't make Russia richer or more secure. It seems the situation is simple and kinda dumb: Putin and a large proportion of Russians simply want to annex these territories, profit be damned. They decided they want it, and now they want it.

Then let's say we broadly agree on the morality of the matter. The question still remains if another US adventure, this time in Europe, is actually going to turn out all that well (as most haven't for the people they claimed to be helping). We also have to wonder if Russia as a failed state will turn out well for Ukraine or Europe, or if this will turn Nuclear if US/NATO refuse to cede any ground, or if the Russia/China alliance will break or not, or for how long the US can even afford and support more wars, etc, etc.

On the other side, do we worry if we're being Neville Chamberlain because we think every aggressor will behave as Hitler in 1938 if we give an inch, so "We gotta do something?" There may even be merit to the sentiment, but "We gotta do something" is one of the most likely ways to screw any situation up. Also, given the US's history of interventions, setting aside morality, just looking at the history of outcomes, the response is questionable. Looking down the road, if this conflict or anything else significantly weakens the US, economically, in domestic politics, or leads to an overextended military, then Ukraine might be lost all the way to the Polish border, not just the Eastern regions.

These are mostly practical considerations that are indeterminate and make the US intervention questionable without even looking at the morality. Given perfect knowledge, you would have a probability and risk management problem on your hands, which often fails to result in a clear convergence of positions. And going back to my original claims, this makes this type of thing very different to Physics and Chemistry and their extensions.

EDIT: Perhaps the most important question comes down to this: Russia clearly screwed up their risk management (as your message alludes to). How can US/NATO do far better with Risk Management? Maybe even better than they've done in all their wars and interventions in recent history?

Russia was trying peaceful and diplomatic options. Very actively. Literally begging to compromise. Before 2014 and before 2022. That did not work. At all.

Deposing the democratically elected government with which Russia was a military ally was an act hostile enough. And Maidan nationalists have already started killing anti-maidan protesters in Crimea and other Russian-speaking regions. I was following those events very closely and was speaking with some of the people living there then.

This seems to miss the point of my comment. What are the reasons for annexation? Not just military action, or even regime change, but specifically annexation? All military goals could be achieved by regime change, keeping Ukraine in current borders, and that would've been much better optics. And all economic reasons disappeared with the end of colonialism. So why annexation? My answer: it's an irrational, ideological desire for that territory. That desire has taken hold of many Russians, including Putin.

Crimea was the only Ukrainian region that was overwhelmingly Russian and pro-Russian. And also the region where a Russian key military base is situated. And at the moment there was (at least, formally) legal way to annex it with the minimal bloodshed. Annexing it has resolved the issue of the military base, and gave the legal status, protection guarantees and rights for the citizens of Crimean republic.

Regime change for entire Ukraine would mean a bloody war, insurgency, and installing a government which the majority of Ukraine population would be against. And massive sanctions against Russia AND Ukraine, for which Russia was not prepared then.

It's true that annexing Crimea would've been rational in a world where +base and +region were the only consequences. (Similar to how the US in the 1840s grabbed Texas and California from Mexico without many problems.) But we do not live in that world. We live in a world where many countries are willing to penalize Russia for annexation and help Ukraine defend itself. Russia's leadership didn't understand that and still doesn't. As a result, Russia's security and economic situation have both gotten much worse and continue to get worse. That's why I call it irrational.

No such countries. There is the USA, which is willing to penalize its geopolitical opponents for being such. There are US puppets that are willing to penalize whomever the USA tells them to. They were penalizing Russia for arbitrary reasons before and after Crimea. If Russia had not annexed Crimea, it would have been penalized about the same, just with other reasons cited.

I see a common pattern in your arguments. Ukraine never did large scale repression against Russian speakers - "but they would've done it". Europe didn't start sanctioning Russian resources until several months into the war - "but they would've done it anyway". The US reduced troops and warheads in Europe every year from 1991 to 2021 - "but they would have attacked us". 141 countries vote in the UN to condemn Russian aggression - "but they're all US puppets, just waiting for a chance to harm us".

There's a name for this kind of irrationality: paranoia. Dictators often drum up paranoia to stay in power, which has the side effect of making the country aggressive.

I disagree with the first part, but I'm not sure if this is the right place to discuss the details. We can discuss it in DM if you want.

You are spot on with the second, though. Exploiting fears of real or perceived threats is an extremely effective tool to control people and nations by posing as their protector.

The champion in this regard is the USA, of course. It fuels and exploits Europe's fear of Russia, Japan's fear of China, India's and China's mutual fear, and so on.

Domestically, the USA's elites exploit an extremely wide range of fears. Fear of terrorists, fear of Russia, fear of China, fear of Nazis, fear of people of different parties, races, sexuality, and even fear of people who fear LGBTQ+ or specific races.

The USA has been using the "divide and conquer" strategy liberally for at least a century now. This will likely have catastrophic consequences, as a divided world will have much less chance of surviving the acute risk period.

Putin also exploits fears, such as fears of LGBT "propaganda", Nazis, and the USA. But I don't think his position before 2022 was so shaky that he would have to resort to war to hold it.


I think that the support for hurting Russia is much greater in Eastern Europe than in the USA. (Maybe with the exception of Hungary.) That does not seem to match the claim that "they only want it because they are puppets".

For the USA, Russia is some kind of noble ancient enemy. Kicking them while they are down may even feel unsportsmanlike.

For Eastern Europe, it is (for the anti-Russian part of the population) more like: "yeah, kick them while you can, stomp as hard as you can, so that they can never hurt us again". Many families remember relatives who were raped by the Red Army (no, it wasn't "only" Germany), kidnapped for Soviet extermination camps, etc. Ukraine is re-living this history right now, for the others this is more like "horrible stories my grandma told me when she considered me old enough to hear it".

Also, are you aware that Russia was planning to annex Belarus and Moldova next? (Putin actually wrote about his plans with Ukraine and Belarus in 2021.) But even taking the entire Ukraine would already make them my neighbors. I prefer that not to happen.

*

That said, perhaps in larger picture, it is completely irrelevant what the Eastern Europeans want to do, if USA decided otherwise.

That doesn't change the fact that they want it. Definitely not just puppets doing whatever the USA tells them. (The example of Hungary actually shows that even the little countries are capable of ignoring American wishes.)

*

Sorry for the mindkilling tone, but I find it annoying when people on the internet keep telling me that I have no agency, not even my own thoughts and wishes, and that I just think what the American overlords want me to think. (Unlike people in Russia or the USA, who are allowed to be independent thinkers.)

I would point out that Putin's goal wasn't to make Russia more prosperous, and that what Putin considers good isn't the same as what an average Russian would consider good. Like Putin's other military adventures, the Crimean annexation and heavy military support of Donbas separatists in 2014 probably had a goal like "make the Russian empire great again" (meaning "as big as possible") and from Putin's perspective the operations were a success. Especially as (if my impression is correct) the sanctions were fairly light and Russia could largely work around them.

Partly he was right, since Russia was bigger. But partly his view was a symptom of continuing epistemic errors. For example, given the way the 2022 invasion started, it looks like he didn't notice the crucial fact that his 2014 actions caused Ukrainians to turn strongly against Russia.

In any case this discussion exemplifies why I want a site entirely centered on evidence. Baturinsky claims that when the Ukrainian parliament voted to remove Yanukovych from office 328 votes to 0 (about 73% of the parliament's 450 members) this was "the democratically elected government" being "deposed". Of course he doesn't mention this vote or the events leading up to it. Who "deposed the democratically elected government"? The U.S.? The tankies say it was the U.S. So who are these people, then? Puppets of the U.S.?

Europe Rights Court Finds Numerous Abuses During Ukraine's Maidan Protests

I shouldn't have to say this on LessWrong, but without evidence it's all just meaningless he-said-she-said. I don't see truthseeking in this thread, just arguing.

I disagree in two ways. First, people are part of physical reality. Reasoning about people and their social relationships is a complex but necessary task.

Second, almost no one goes to first principles and studies climate science themselves in depth. But even if you did that, you'd (1) be learning about it from other people with their interpretations, and (2) be unable to study all the subfields in depth. Atmospheric science can tell you about the direct effect of greenhouse gases, but to predict the total effect quantitatively, and to evaluate alternate hypotheses of global warming, you'll need to learn about glaciology, oceanography, coupled earth-system modeling, the effects of non-GHG aerosols, solar science, and how data is aggregated on CO2 emissions, CO2 concentrations, other GHGs, the various temperature series, etc.

Finally, if you determine that humans cause warming after all, now you need to start over with ecology, economic modeling, etc. in order to determine whether it's actually a big problem. And then, if it is a problem, you'll want to understand how to fix it, so now you have to study dozens of potential interventions. And then, finally, once you've done all that and you're the world's leading expert in climate science, now you get frequent death threats and hate mail. A billion people don't believe a word you say, while another billion treat your word as if it were the anointed word of God (as long as it conforms to their biases). You have tons of reliable knowledge, but it's nontransferable.

Realistically we don't do any of this. Instead we mostly try to figure out the social reality: Which sources seem to be more truth-seeking and which seem to be more tribal? Who are the cranks, who are the real experts, and who can I trust to summarize information? For instance, your assertion that Noam Chomsky provides "good, uncontroversial fact-based arguments" is a social assertion that I disagree with.

I think going into the weeds is a very good way of figuring out the social truth that you need in order to figure out the truth about the broader topic the weeds are related to. For instance, if the weeds are telling you that pundit X is clearly telling a lie Y, and if everybody who believes Z also believes X and Y, then you've learned not to trust X, X's followers, Y, and Z, and all of this is good. Except that for some people, the weeds they end up looking at are actually astroturf, tribally-engineered plants very different from the weeds they thought they were looking at, and that's the sort of problem I would like to solve. I want a place where a tribally-engineered weed is reliably marked as such.

So I think that in many ways studying Ukraine is just the same as studying climate science, except that the "fog of war" and the lack of rigorous sources for war information make it hard to figure some things out.

Okay, I think I understand what you mean: since it's impossible to fully comprehend climate change from first principles, it ends up being a political and social discussion (and anyway, that's empirically the case). Nonetheless, I think there's something categorically different about the physical sciences compared to the more social facts.

I think perfect knowledge of climate science would tend toward convergence, whereas at least some social issues (Ukraine being a possible example) just don't work that way. The Chomsky example is germane: prior to '92, his work on politics was all heavily cited and based on primary sources, pretty much as solid academically as you could ask for (see for example "The Chomsky Reader"), and we already disagree on this.

With regard to Ukraine, I think intelligent people with lots of information might end up diverging even more in their opinions on how much violence each side should be willing to threaten, use, and display in an argument about squiggly lines on map blobs. Henry Kissinger ended up not even agreeing with himself from week to week, and he's probably as qualified an expert on this matter as any of us. I think it's fair to suggest that no number of facts regarding Ukraine is going to bring the kind of convergence you would see if we could upload the sum of climate science into each of our human minds.

Even if I am wrong in the Ukraine case, do you think there are at least some social realities that if you magically downloaded the full spectrum of factual information into everyone's mind, people's opinions might still diverge? Doesn't that differ from a hard science where they would tend to converge if you understood all the facts? Doesn't this indicate a major difference of categories?

Another way of looking at it: social realities are not nearly as deterministic given factual truth as accurate conclusions in the hard sciences are. They are always vastly more stochastic. Even comparing the fields, the correlation coefficients and R² for whole models in sociology, at its absolute best, are nothing at all compared to the determinism you can get in physics and chemistry.

I think that the people who are truthseeking well do converge in their views on Ukraine. Around me I see tribal loyalty to Kremlin propaganda, to Ukrainian/NAFO propaganda, to anti-Americanism (enter Noam Chomsky) and/or to America First. Ironically, anti-American and America First people end up believing similar things, because they both give credence to Kremlin propaganda that fits into their respective worldviews. But I certainly have a sense of convergence among high-rung observers who follow the war closely and have "average" (or better yet scope-sensitive/linear) morality. Convergence seems limited by the factors I mentioned though (fog of war, poor rigor in primary/secondary sources).

P.S. A key thing about Chomsky is that his focus is all about America, and to understand the situation properly you must understand Putin and Russia (and to a lesser extent Ukraine). I recommend Vexler's video on Chomsky/Ukraine as well as this video from before the invasion. I also follow several other analysts and English-speaking Russians (plus Russian Dissent translated from Russian) who give a picture of Russia/Putin generally compatible with Vexler's.

do you think there are at least some social realities that if you magically downloaded the full spectrum of factual information into everyone's mind, people's opinions might still diverge

Yes, except I'd use the word "disagree" rather than "diverge". People have different moral intuitions, different brain structures / ways of processing info, and different initial priors that would cause disagreements. Some people want genocide, for example, and while knowing all the facts may decrease (or in many cases eliminate) that desire, it seems like there's a fundamental difference in moral intuition between people that sometimes like genocide and those of us who never do, and I don't see how knowing all the facts accurately would resolve that.

What you are actually making is something like a "lesser of two evils" argument, or a bet that certain tradeoffs will pay off, which one party may buy and another may not. Having explored the reasoning this far, I would suggest this is one class of circumstances where, even if you beamed all the facts into two people's minds, both with "average" morality, there would still tend to be disagreement. The disagreement definitely doesn't hinge on someone wanting something bad, like genocide. People could both want the same outcomes and still diverge in their conclusions with the facts beamed into their minds in this class of situations (which, to my original argument, differs tremendously from physics).

I hadn't seen old man Chomsky talk about Ukraine prior to your video above. I think, though, that if you look at his best work, you might be able to softly mollify the impact, but it's not like he's pulling his ideas about, say, every single US action in South America and the Middle East being very bad for the people they claimed to help, out of some highly skewed view. Those claims border on fairly obvious, at any rate, and your video's recasting of him as a "voice of moral outrage" hinges on his off-the-cuff interviews, not his heavily cited work (as I mentioned, The Chomsky Reader shows a different man than the one in the video).

Even setting him aside as a reference and looking at the recent history of US wars: at the most generous, weighing Russian badness against US badness, any "moral high ground" argument for the US being good in this case will boil down to a lesser-of-two-evils assessment. Also, looking at US history, you lose some of the "this is just an annexation" framing, because a US proxy war since 2014 would fit the pattern of pretty much everything the USA has done both recently and for the past 100 years.

Your point about also looking at Putin/Russia is fine, and practical solutions to the matter should be considered as well. I think we would all call Putin a criminal; that isn't the question at hand. The question is whether another US adventure, this time in Europe, is actually going to turn out all that well, or whether Russia as a failed state will turn out well for Ukraine or Europe, or whether this will turn nuclear if you refuse to cede any ground, or whether the Russia/China alliance will break, or for how long the US can even afford and support more wars, etc. These are mostly practical matters that are indeterminate and make the intervention questionable. In practical terms, they present different good/bad tradeoffs and better/worse odds on outcomes to different parties, amounting to weighing different "lesser evil" projections. They don't hinge on our moral intuitions differing at all.

(And again, all this differs in category and the way it behaves from Physics)

every single US action in South America and the Middle East being very bad for the people they claimed to help

Maybe if we also included WW2 Germany and Japan in this reference group, the outcomes would be more of a mixed bag.

Then again, the argument might be that American foreign policy became bad after WW2.

I don't know what you are referring to in the first sentence, but the idea that this is a war between US and Russia (not Russia and Ukraine) is Russian propaganda (which doesn't perfectly guarantee it's BS, but it is BS.)

In any case, this discussion exemplifies my frustration with a world in which a site like I propose does not exist. I have my sources, you have yours, they disagree on the most basic facts, and nobody is citing evidence that would prove the case one way or another. Even if we did go deep into all the evidence, it would be sitting here in a place where no one searching for information about the Ukraine war will ever see it. I find it utterly ridiculous that most people are satisfied with this status quo.

Well, someone was working on a similar-ish project recently, @Bruce Lewis with HowTruthful. Maybe you two can combine your ideas or settle on an amalgamation together. 

If possible, please let us know how it goes a couple months from now!

The best path forward might be for @DPiepgrass to make a prototype or mockup, borrowing ideas from HowTruthful and then discussing from there.

Another thing: not only is my idea unpopular, it's obvious from vote counts that some people are actively opposed to it. I haven't seen any computational epistemology (or evidence repository) project that is popular on LessWrong, either. Have you seen any?

If in fact this sort of thing tends not to interest LessWrongers, I find that deeply disturbing, especially in light of the stereotypes I've seen of "rationalists" on Twitter and EA forum. How right are the stereotypes? I'm starting to wonder.

Ah, this is nice. I was avoiding looking at my notifications for the last 3 months for fear of a reply by Christian Kl, but actually it turned out to be you two :D

I cannot work on this project right now because I'm busy earning money to be able to fund it (as I don't see how to make money on it). I have a family of 4+, so this is far from trivial. I've been earning for a couple of years, and I will need a couple more. I will leave my thoughts on HowTruthful on one of your posts about it.

Everybody has their own criteria for truth.

So there should be a wide choice of algorithms and algorithm tweaks that analyze, filter, and process the relevant data in a way that satisfies the specific needs of a specific person.

Some people seem to have criteria for truth that produce self-sealing beliefs.

But yes, I think it would be interesting and valuable to be able to switch out algorithms for different ones to see how that affects the estimated likelihood that the various propositions and analyses are correct. If an algorithm is self-consistent, not based on circular reasoning, and not easily manipulable, I expect it to provide useful information.

Also, such alternate algorithms could potentially serve as "bias-goggles" that help people to understand others' points of view. For example, if someone develops a relatively simple, legible algorithm that retrodicts most political views on a certain part of the political spectrum (by re-ranking all analyses in the evidence database), then the algorithm is probably informative about how people in that area of the spectrum form their beliefs.
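To make this concrete, here is a minimal sketch of what swappable ranking algorithms over an evidence database might look like. Everything here is a hypothetical illustration: the `Analysis` fields, the scoring weights, and the two scorers are assumptions for the sake of example, not a real design.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Analysis:
    claim: str
    evidence_strength: float  # 0..1: how well-sourced the analysis is (assumed metric)
    source_tribalism: float   # 0..1: how partisan the source seems (assumed metric)

def evidence_first(a: Analysis) -> float:
    # One possible "truth-seeking" scorer: reward evidence, penalize tribal sourcing.
    return a.evidence_strength - 0.5 * a.source_tribalism

def tribal_lens(a: Analysis) -> float:
    # A "bias-goggles" scorer: rank by in-group alignment, ignoring evidence entirely.
    return a.source_tribalism

def rerank(analyses: List[Analysis], score: Callable[[Analysis], float]) -> List[str]:
    # The same database, re-ranked under whichever lens the reader selects.
    return [a.claim for a in sorted(analyses, key=score, reverse=True)]
```

The point of the sketch is that if a simple, legible scorer like `tribal_lens` reproduces the rankings a given group actually endorses, that scorer itself tells you something about how the group forms its beliefs.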

Regarding the shopping example, I find that B2B websites like Fisher Scientific and McMaster-Carr have good search and filter options. Pcpartpicker.com is also a good example.

Per "Politics is the Mind-Killer", there's no good reason to lead with examples that are this political.

I don't buy that truthful things should be, in general, difficult to distinguish from untruthful things. I'm not even sure what it would mean, exactly, for truth-seeking to just be "difficult."

We could ask whether we would expect true claims to "sound better" to the one reading or hearing them than false claims. This would have important implications: for example, if they do sound better, then "persuasion" isn't something anyone needs to worry about, unless they were intentionally trying to persuade someone of something that was both false and sounded bad, which would be the case by assumption here.

The idea that truth-seeking is inherently difficult is an idea that sounds bad. Thus, for me to believe it would require me to believe that bad-sounding things could be true and good-sounding things could be false. How often would this mismatch happen? There is no way a priori to tell how often we would expect this, and that in itself is a bad-sounding thing. 

An individual who investigates stuff, but isn't popular, has nowhere they can put their findings and expect others to find them. Sure, you can put up a blog or a Twitter thread, but that hardly means anyone will look at it.

I even more don't buy the idea that false things monetize better than true things. But this is a complaint I sometimes hear, and I can't help but sneer at it a bit. It's one thing to think that false things and true things compete on an even playing-field, but it's a wholly different thing to think that people are inherently hardwired to find false things more palatable and therefore spend more time looking for it / paying for it. 

It sounds very similar to the arguments for fighting misinformation on social media platforms: Mainly, that it tends to spread more easily than "true but boring / unpleasant" things. During COVID-19, for example, the people that thought we ought to stem the spread of misinformation also typically believed that COVID-19 was more dangerous than the opposite group. 

This seems like a very important crux, then, at least: The dichotomy between good-seeming / bad-seeming and true / false. I agree that we should get to the bottom of it.

I don't understand why you say "should be difficult to distinguish" rather than "are difficult", why you seem to think finding the truth isn't difficult, or what you think truthseeking consists of.

For two paragraphs you reason about "what if true claims sound better". But true claims don't inherently "sound better", so I don't understand why you're talking about it. How good a claim "sounds" varies from person to person, which implies "true claims sound better" is a false proposition (assuming a fact can be true or false independently of two people, one of whom thinks the claim "sounds good" and the other thinks it "sounds bad", as is often the case). Moreover, the same facts can be phrased in a way that "sounds good" or "sounds bad".

I didn't say "false things monetize better than true things". I would say that technically correct and broadly fair debunkings (or technically correct and broadly fair publications devoted to countering false narratives) don't monetize well, certainly not to the tune of millions of dollars annually for a single pundit. Provide counterexamples if you have them.

people are inherently hardwired to find false things more palatable

I didn't say or believe this either. For such a thing to even be possible, people would have to easily distinguish true and false (which I deny) to determine whether a proposition is "palatable".

The dichotomy between good-seeming / bad-seeming and true / false.

I don't know what you mean. Consider rephrasing this in the form of a sentence.

I don't understand why you say "should be difficult to distinguish" rather than "are difficult", why you seem to think finding the truth isn't difficult, or what you think truthseeking consists of.

Because it feels like it's a choice whether or not I want to consider truth-seeking to be difficult. You are trying to convince me that I should consider it difficult, so that means I have the option not to. If it simply were difficult, you wouldn't need to try to convince me of that; it would be obvious on its own.

In addition to that, "should be" means that I think something ought to be a certain way. It certainly would be better if truth-seeking weren't difficult, wouldn't you agree?

I didn't say "false things monetize better than true things". I would say that technically correct and broadly fair debunkings (or technically correct and broadly fair publications devoted to countering false narratives) don't monetize well, certainly not to the tune of millions of dollars annually for a single pundit. Provide counterexamples if you have them.

So you're not saying that false things monetize better than true things; you're saying that things which correctly state that other things are false monetize worse than the things they claim are false. I don't think I misunderstood you here, but I may have interpreted your meaning more broadly than it was intended.

I would think that how well something monetizes depends on how much people want to hear it. So yes, that would mean that it depends on how good something sounds. Our disagreement is on whether or not how good something sounds has any relation whatsoever to how true it is. 

But true claims don't inherently "sound better"

To be clear, I'm saying that they do, and that this means that truth-seeking isn't that difficult, and it is counterproductive to believe that it is difficult.

I'm saying that [true claims sound better]

The proof I gave that this is false was convincing to me, and you didn't rebut it. Here are some examples from my father:

ALL the test animals [in mRNA vaccine trials] died during Covid development.

The FDA [are] not following their own procedures.

There is not a single study that shows [masks] are of benefit.

[Studies] say the jab will result in sterility.

Vaccination usually results in the development of variants.

He loves to say things like this (he can go on and on saying such things; I assume he has it all memorized) and he believes they are true. They must sound good to him. They don't sound good to me (especially in context). How does this not contradict your view?

it feels like it's a choice whether or not I want to consider truth-seeking to be difficult.

Agreed, it is.

We should be able to mutually agree on what sounds better. For example, "vaccines work" probably sounds better to us both. People say things that don't sound good all the time; just because they say something doesn't mean they also think it sounds good.

Things like "we should be able to figure out the truth as it is relevant to our situation with the capabilities we have" have to sound good to everyone, I would think. That means there's basis for alignment, here.