Richard Hollerith. 15 miles north of San Francisco. hruvulum@gmail.com
My probability that AI research will end all human life is .92. It went up drastically when Eliezer started going public with his pessimistic assessment in April 2022. Until then, my confidence in MIRI (and my knowledge that MIRI has enough funding to employ many researchers) was keeping my probability down to about .4. (I am glad I found out about Eliezer's assessment.)
Currently I am willing to meet with almost anyone on the subject of AI extinction risk.
Last updated 26 Sep 2023.
This contextual memory file is edited by the user, never the AI?
In 2015, I didn't write much about AI on Hacker News because even just explaining why it is dangerous will tend to spark enthusiasm for it in some people (people attracted to power, who notice that since it is dangerous, it must be powerful). These days, I don't let that consideration stop me from writing about AI.
Good points, which in part explain why I think it is very, very unlikely that AI research can be driven underground (in the US or worldwide). I was speaking to the desirability of driving it underground, not its feasibility.
pausing means moving AI underground, and from what I can tell that would make it much harder to do safety research
I would be overjoyed if all AI research were driven underground! The main source of danger is the fact that there are thousands of AI researchers, most of whom are free to communicate and collaborate with each other. Lone researchers or small underground cells of researchers who cannot publish their results would be vastly less dangerous than the current AI research community, even if there were many lone researchers and many small underground teams. And if we could make it illegal for these underground teams to generate revenue by selling AI-based services or to raise money from investors, that would bring me great joy, too.
Research can be modeled as a series of breakthroughs such that it is basically impossible to make breakthrough N before knowing about breakthrough N-1. If the researcher who makes breakthrough N-1 is unable to communicate it to researchers outside of his own small underground cell of researchers, then only that small underground cell or team has a chance at discovering breakthrough N, and research would proceed much more slowly than it does under current conditions.
The biggest hope for our survival is the quite likely and realistic hope that many thousands of person-years of intellectual effort that can only be done by the most talented among us remain to be done before anyone can create an AI that could drive us extinct. We should be making the working conditions of the (misguided) people doing that intellectual labor as difficult and unproductive as possible. We should restrict or cut off the labs' access to revenue, to investment, to "compute" (GPUs), to electricity and to employees. Employees with the skills and knowledge to advance the field are a particularly important resource for the labs; consequently, we should reduce or restrict their number by making it as hard as possible (illegal, preferably) to learn, publish, teach or lecture about deep learning.
Also, in my assessment, we are not getting much by having access to the AI researchers: we're not persuading them to change how they operate and the information we are getting from them is of little help IMHO in the attempt to figure out alignment (in the original sense of the word where the AI stays aligned even if it becomes superhumanly capable).
The most promising alignment research IMHO is the kind that mostly ignores the deep-learning approach (which, as far as I know, is the sole focus of all the major labs) and inquires deeply into which approach to creating a superhumanly-capable AI would be particularly easy to align. That was the approach taken by MIRI before it concluded in 2022 that its resources were better spent trying to slow down the AI juggernaut through public persuasion.
Deep learning is a technology created by people who did not care about alignment or wrongly assumed alignment would be easy. There is a reason why MIRI mostly ignored deep learning when most AI researchers started to focus on it in 2006. It is probably a better route to aligned transformative AI to search for another, much-easier-to-align technology (that can eventually be made competitive in capabilities with deep learning) than to search for a method to align AIs created with deep-learning technology. (To be clear, I doubt that either approach will bear fruit in time unless the AI juggernaut can be slowed down considerably.) And of course, if alignment researchers are mostly ignoring deep learning, there is little they can learn from the leading labs.
Impressive performance by the chatbot.
Maybe "motto" is the wrong word. I meant words / concepts to use in a comment or in a conversation.
"Those companies that created ChatGPT, etc? If allowed to continue operating without strict regulation, they will cause an intelligence explosion."
All 3 of the other replies to your question overlook the crispest consideration: namely, it is not possible to ensure the proper functioning of even something as simple as a circuit for division (such as we might find inside a CPU) through testing alone; there are too many possible inputs (too many pairs of possible 64-bit divisors and dividends) to test in one lifetime, even if you make a million perfect copies of the circuit and test them in parallel.
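To make the combinatorics concrete, here is a back-of-the-envelope calculation (my own illustration; the parallelism and throughput figures are assumptions, and generous ones):

```python
# Rough illustration (my own arithmetic): how long would it take to test
# every input of a 64-bit divider?

pairs = 2**64 * 2**64            # every (dividend, divisor) pair: about 3.4e38
copies = 1_000_000               # a million perfect copies of the circuit
tests_per_second = 10**9         # assume each copy checks a billion pairs per second

seconds = pairs / (copies * tests_per_second)
years = seconds / (60 * 60 * 24 * 365)
print(f"roughly {years:.1e} years")   # on the order of 1e16 years, far longer than a lifetime
```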
Let us consider very briefly what else besides testing an engineer might do to ensure (or "verify", as the engineer would probably say) the proper operation of a circuit for dividing. The circuit is composed of 64 sub-circuits, each responsible for producing one bit of the output (i.e., the quotient to be calculated), and an engineer will know enough about arithmetic to know that the sub-circuit for calculating bit N should bear a close resemblance to the one for bit N+1: it might not be exactly identical, but any differences will be simple enough to be understood by a digital-design engineer. Usually, that is: in 1994, a bug was found in the floating-point division circuit of the Intel Pentium CPU, precipitating a product recall that cost Intel about $475 million. After that, Intel switched to a more reliable but much more ponderous technique for verifying its CPUs, called "formal verification".
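To give a taste of what formal verification means, here is a toy sketch (my own illustration, not Intel's actual methodology) using the z3 SMT solver's Python bindings (the z3-solver package): it proves a property of unsigned division for every one of the 2^64 possible inputs at once, without testing any individual case.

```python
# Toy illustration of formal verification (not Intel's actual flow):
# prove that, for every 64-bit x, shifting right by 3 equals unsigned
# division by 8 -- covering all 2**64 inputs without testing any of them.
# Requires the z3-solver package: pip install z3-solver

from z3 import BitVec, UDiv, LShR, prove

x = BitVec('x', 64)                 # a symbolic 64-bit value
prove(LShR(x, 3) == UDiv(x, 8))     # prints "proved"
```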
My point is that the question you are asking is sort of a low-stakes question (if you don't mind my saying so), because there is a sharp limit to how useful testing can be. Testing can reveal that the designers need to go back to the drawing board, but human designers can't go back to the drawing board billions of times: there is not enough time, because human designers are not that fast. So most of the many tens or hundreds of bits of human-applied optimization pressure that will be required for any successful alignment effort will need to come from processes other than testing, and discussion of those other processes is more pressing than any discussion of testing.
Eliezer's "Einstein's Arrogance is directly applicable here although I see that that post uses "bits of evidence" and "bits of entanglement" instead of "bits of optimization pressure".
Another important consideration is that there is probably no safe way to run most of the tests we would want to run on an AI much more powerful than we are.
Let me reassure you that there’s more than enough protein available in plant-based foods. For example, here’s how many grams of protein there are in 100 grams of meat
That is misleading because most foods are mostly water, including the (cooked) meats you list, but the first 4 of the plant foods you list have had their water artificially removed: soy protein isolate; egg white, dried; spirulina algae, dried; baker’s yeast.
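To illustrate with rough numbers (approximate, USDA-style figures; my own example, not from the thread): once you put the meat on the same dry-weight basis as the dried plant foods, the gap largely disappears.

```python
# Rough illustration with approximate figures (my own example, not from the
# thread): protein per 100 g as served vs. per 100 g of dry matter.

chicken_protein_per_100g = 31      # cooked chicken breast, grams of protein (approx.)
chicken_water_per_100g = 65        # roughly 65 g of that 100 g is water (approx.)

soy_isolate_protein_per_100g = 88  # soy protein isolate, already nearly water-free (approx.)

# Protein per 100 g of dry matter for the chicken:
chicken_dry_basis = chicken_protein_per_100g / (100 - chicken_water_per_100g) * 100
print(f"chicken, dry basis: {chicken_dry_basis:.0f} g per 100 g")   # ~89 g
print(f"soy isolate:        {soy_isolate_protein_per_100g} g per 100 g")
```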
Moreover, the human gut digests and absorbs more of animal protein than of plant protein. Part of the reason for this is that plant protein includes more fragments that are impervious to digestive enzymes in the human gut and more fragments (e.g., lectins) that interfere with human physiology.
Moreover, there are many people who can and do eat 1 or even 2 lb of cooked meat every day without obvious short-term consequences whereas most people who would try to eat 1 lb of spirulina (dry weight) or baker's yeast (dry weight) in a day would probably get acute distress of the gut before the end of the day even if the spirulina or yeast was mixed with plenty of other food containing plenty of water, fiber, etc. Or at least that would be my guess (having eaten small amounts of both things): has anyone made the experiment?
I am surprised by that because I had been avoiding learning about LLMs (including making any use of them) until about a month ago, so it didn't occur to me that implementing this might have been as easy as adding to the system prompt instructions about what kinds of information to put in the contextual memory file.
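For what it's worth, here is a minimal sketch of what I am imagining (purely my guess at how such a feature might be wired up; all the names are hypothetical, not any actual product's code):

```python
# Purely a guess at how a contextual-memory feature might be implemented
# (hypothetical names): the system prompt tells the model which kinds of
# information to record in the contextual memory file, and the current
# contents of the file are appended so the model can use them.

MEMORY_INSTRUCTIONS = (
    "You maintain a contextual memory file for this user. "
    "When the user states lasting facts about themselves (preferences, "
    "background, ongoing projects), append a short note to the memory file. "
    "Do not record transient or sensitive details."
)

def build_system_prompt(memory_file_text: str) -> str:
    # Prepend the instructions, then include the current memory file contents.
    return f"{MEMORY_INSTRUCTIONS}\n\n--- contextual memory ---\n{memory_file_text}"
```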