RHollerith

Richard Hollerith. 15 miles north of San Francisco. hruvulum@gmail.com

My probability that AI research will end all human life is .92. It went up drastically when Eliezer started going public with his pessimistic assessment in April 2022. Until then, my confidence in MIRI (and my knowledge that MIRI had enough funding to employ many researchers) had been keeping my probability down to about .4. (I am glad I found out about Eliezer's assessment.)

Currently I am willing to meet with almost anyone on the subject of AI extinction risk.

Last updated 26 Sep 2023.


Comments


Grass is mostly (water and) carbs, just not carbs a person can digest and burn with any efficiency.

Good point. Change my final sentence to, "A warning shot is made by the entity capable of imposing damaging consequences on you -- to alert you and to give you a way to avoid the most damaging of the consequences at its disposal."

Many believe that one hope for our future is that the AI labs will make some mistake that kills many people, but not all of us, resulting in the survivors finally realizing how dangerous AI is. I wish people would refer to that as a "near miss", not a "warning shot". A warning shot is when the danger (originally a warship) actually cares about you but cares about its mission more, with the result that it complicates its plans and policies to try to keep you alive.

I am surprised by that because I had been avoiding learning about LLMs (including making any use of LLMs) until about a month ago, so it didn't occur to me that implementing this might have been as easy as adding instructions to the system prompt about what kinds of information to put in the contextual memory file.
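
To illustrate what I mean, here is a minimal sketch of such an implementation; the file name, prompt wording, model name, and use of the OpenAI Python client are my own illustrative assumptions, not a claim about how any particular product actually does it:

```python
# Minimal illustrative sketch (not any vendor's actual implementation):
# a "contextual memory" feature driven entirely by system-prompt instructions.
# The file name, prompt wording, and model name are assumptions.
from pathlib import Path

from openai import OpenAI

MEMORY_FILE = Path("memory.md")
client = OpenAI()  # reads OPENAI_API_KEY from the environment


def chat(user_message: str) -> str:
    memory = MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""
    system_prompt = (
        "You are a helpful assistant.\n"
        "Contents of your contextual memory file:\n"
        f"{memory}\n"
        "If the user states a lasting fact or preference worth remembering, "
        "end your reply with a line beginning 'MEMORY: ' giving the fact to record."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    reply = response.choices[0].message.content
    # Append any model-proposed memory lines to the file.
    with MEMORY_FILE.open("a") as f:
        for line in reply.splitlines():
            if line.startswith("MEMORY: "):
                f.write(line[len("MEMORY: "):] + "\n")
    return reply
```

The point is only that the entire "memory" behavior could live in a few lines of prompt text plus file I/O.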

This contextual memory file is edited by the user, never the AI?

In 2015, I didn't write much about AI on Hacker News because even just explaining why it is dangerous tends to spark enthusiasm for it in some people (people attracted to power, who notice that since it is dangerous, it must be powerful). These days, I don't let that consideration stop me from writing about AI.

Good points, which in part explain why I think it is very, very unlikely that AI research can be driven underground (in the US or worldwide). I was speaking to the desirability of driving it underground, not its feasibility.

pausing means moving AI underground, and from what I can tell that would make it much harder to do safety research

I would be overjoyed if all AI research were driven underground! The main source of danger is the fact that there are thousands of AI researchers, most of whom are free to communicate and collaborate with each other. Lone researchers or small underground cells of researchers who cannot publish their results would be vastly less dangerous than the current AI research community, even if there were many lone researchers and many small underground teams. And if we could make it illegal for these underground teams to generate revenue by selling AI-based services or to raise money from investors, that would bring me great joy, too.

Research can be modeled as a series of breakthroughs such that it is basically impossible to make breakthrough N before knowing about breakthrough N-1. If the researcher who makes breakthrough N-1 is unable to communicate it to anyone outside his own small underground cell, then only that cell has a chance at discovering breakthrough N, and research would proceed much more slowly than it does under current conditions.

The biggest hope for our survival is the quite realistic hope that many thousands of person-years of intellectual effort, which can only be done by the most talented among us, remain to be done before anyone can create an AI that could drive us extinct. We should be making the working conditions of the (misguided) people doing that intellectual labor as difficult and unproductive as possible. We should restrict or cut off the labs' access to revenue, to investment, to "compute" (GPUs), to electricity and to employees. Employees with the skills and knowledge to advance the field are a particularly important resource for the labs; consequently, we should reduce or restrict their number by making it as hard as possible (preferably illegal) to learn, publish, teach or lecture about deep learning.

Also, in my assessment, we are not getting much by having access to the AI researchers: we're not persuading them to change how they operate, and the information we are getting from them is of little help in the attempt to figure out alignment (in the original sense of the word, where the AI stays aligned even if it becomes superhumanly capable).

The most promising alignment research, IMHO, is the kind that mostly ignores the deep-learning approach (which, as far as I know, is the sole focus of all the major labs) and inquires deeply into which approach to creating a superhumanly capable AI would be particularly easy to align. That was the approach taken by MIRI before it concluded in 2022 that its resources were better spent trying to slow down the AI juggernaut through public persuasion.

Deep learning is a technology created by people who did not care about alignment or who wrongly assumed alignment would be easy. There is a reason why MIRI mostly ignored deep learning when most AI researchers started to focus on it in 2006. It is probably a better route to aligned transformative AI to search for another, much-easier-to-align technology (one that can eventually be made competitive in capabilities with deep learning) than to search for a method to align AIs created with deep-learning technology. (To be clear, I doubt that either approach will bear fruit in time unless the AI juggernaut can be slowed down considerably.) And of course, if alignment researchers are going to be mostly ignoring deep learning, there is little they can learn from the leading labs.

Impressive performance by the chatbot.

Maybe "motto" is the wrong word. I meant words / concepts to use in a comment or in a conversation.

"Those companies that created ChatGPT, etc? If allowed to continue operating without strict regulation, they will cause an intelligence explosion."
