Richard Hollerith. 15 miles north of San Francisco. hruvulum@gmail.com
My probability that AI research will end all human life is .92. It went up drastically when Eliezer started going public with his pessimistic assessment in April 2022. Until then, my confidence in MIRI (and my knowledge that MIRI had enough funding to employ many researchers) was keeping my probability down to about .4. (I am glad I found out about Eliezer's assessment.)
Currently I am willing to meet with almost anyone on the subject of AI extinction risk.
Last updated 26 Sep 2023.
Maybe I failed to write something that reasonable people could parse.
"The prediction market"? I am confused what you mean by that.
I mean a technology and its implementations similar to how "the telephone" refers to a technology.
I should have written, "I wouldn't be surprised if prediction markets start growing much faster than they have been growing over the last 3 decades or so", to avoid implying that they are not currently important.
$20B in what? Annual revenue (i.e., mostly fees added to transactions) of the companies that run the markets? Assets under management? Amount deposited by traders at a particular moment in time? Valuation of the companies that run the markets?
For decades, people have been saying that the prediction market has the potential to become economically important, yet it remains unimportant. I would not be surprised if it becomes important over the next 4 years thanks to broadly-available AI technology.
Let's define "economically important" as a state of affairs in which there continues to be at least $50 billion riding on predictions at every instant in time.
First of all, AI tech might make prediction markets better by helping with market-making and arbitrage. Second, a sufficiently robust prediction market might turn out to be the most cost-effective way for an owner of an AI service like ChatGPT to make certain kinds of improvements to that service, with the result that owners of AI services become a major source of both investment in and revenue for the prediction markets.
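The arbitrage idea in the first sentence can be sketched concretely. This is a toy illustration, not a real market API: the function name and the prices are my own assumptions.

```python
# Hypothetical sketch of cross-market arbitrage in binary prediction
# markets. Prices are dollar costs of a contract paying $1 if that
# side wins; all numbers are illustrative, not real market data.

def arbitrage_profit(yes_price_a: float, no_price_b: float) -> float:
    """Guaranteed profit per contract pair from buying YES on market A
    and NO on market B for the same event, ignoring fees.

    Exactly one of the two positions pays out $1, so if the combined
    cost of the pair is under $1, the difference is risk-free profit.
    """
    cost = yes_price_a + no_price_b
    return max(0.0, 1.0 - cost)

# Example: market A sells YES at $0.40, market B sells NO at $0.55.
# Total cost $0.95 for a guaranteed $1.00 payout.
print(round(arbitrage_profit(0.40, 0.55), 2))  # 0.05

# When the combined cost is $1 or more, there is no arbitrage.
print(arbitrage_profit(0.60, 0.50))  # 0.0
```

An AI market-maker doing this kind of scan across markets would push prices on different venues toward agreement, which is one way the technology could make the markets more robust.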
Yes, even an AI that has not undergone any recursive self-improvement might be a threat to human survival. I remember Eliezer saying this (a few years ago) but please don't ask me to find where he says it.
Parenthetically, I do not yet know of anyone in the "never build ASI" camp and would be interested in reading or listening to such a person.
I'm basically not worried about this.
Google Search has proven pretty good at preventing spam and content farms from showing up in search results at rates that would make Google Search useless, despite the fact that spammers and SEO actors spend billions of dollars per year trying to influence the search results (in ways that are good for the SEO actor or his client, but bad for users of Google Search).
Moreover, even though none of OpenAI, Anthropic, or DeepSeek had access to the expertise, software, or data Google was using to filter this bad content from search results, this bad content (spam, content farms, and other efforts by SEO actors) has very little influence (as far as I can tell) on the answers given by the current crop of LLM-based services from these companies.
A creator of an LLM is motivated to make the LLM as good as possible at truthseeking (because truthseeking correlates with usefulness to users). If it hasn't happened already, then within a couple of years LLMs will have become good enough at truthseeking to filter out the kind of spam you are worried about, even though the creator of the LLM never directed large quantities of human attention and skill specifically at the problem the way Google has had to do over the last 25 years against the efforts of SEO actors. The labs are also motivated to make the answers provided by LLM services as relevant as possible to the user, which also has the effect of filtering out content produced by the psychotic people.
Also useful for filtering out the smoke from forest fires, although it gets tedious to wear one all day for days in a row, so in addition to a mask it is nice to own an air purifier with a HEPA or MERV rating for when you are indoors.
This soldier spent 2 years fighting for Ukraine, including 6 months recently as an operator of FPV drones, and he too is skeptical that drones will revolutionize military affairs during the next few years. I don't recall the details of his arguments, but my recollection is that he does offer some argumentation in this interview.