RHollerith

Richard Hollerith. 15 miles north of San Francisco. hruvulum@gmail.com

My probability that AI research will end all human life is .92. It went up drastically when Eliezer started going public with his pessimistic assessment in April 2022. Until then, my confidence in MIRI (and my knowing that MIRI had enough funding to employ many researchers) had been keeping my probability down to about .4. (I am glad I found out about Eliezer's assessment.)

Currently I am willing to meet with almost anyone on the subject of AI extinction risk.

Last updated 26 Sep 2023.

Comments

The following changes, implemented in the US, Europe, and East Asia, would probably buy us many decades:

Close all the AI labs and return their assets to their shareholders;

Require all "experts" (e.g., researchers, instructors) in AI to leave their jobs; give them money to compensate them for their temporary loss of earnings power;

Make it illegal to communicate technical knowledge about machine learning or AI: this includes publishing papers, engaging in informal conversations, tutoring, and talking about it in a classroom; even distributing already-published titles on the subject would be banned.

Of course it is impractical to completely stop these activities (especially the distribution of already-published titles), but we do not have to completely stop them; we need only sufficiently reduce the rate at which the AI community worldwide produces algorithmic improvements. Here we are helped by the fact that figuring out how to create an AI capable of killing us all is probably still a very hard research problem.

What is most dangerous about the current situation is the tens of thousands of researchers world-wide with tens of billions in funding who feel perfectly free to communicate and collaborate with each other and who expect that they will be praised and rewarded for increasing our society's ability to create powerful AIs. If instead they come to expect more criticism than praise and more punishment than reward, most of them will stop -- and more importantly almost no young person is going to put in the years of hard work needed to become an AI researcher.

I know how awful this sounds to many of the people reading this, including the person I am replying to, but you did ask, "Is there some other policy target which would somehow buy a lot more time?"

If we manage to avoid extinction for a few centuries, cognitive capacities among the most capable people are likely to increase substantially merely through natural selection. Because our storehouse of potent knowledge is now so large, and because of other factors (e.g., increased specialization in the labor market), it is easier than ever for people with high cognitive capacity to earn above-average incomes and to avoid, or obtain cures for, illnesses in themselves and their children. (The level of health care a person can obtain by consulting doctors and being willing to follow their recommendations will always lag behind the level that can be obtained by doing that and also doing one's best to create and refine a mental model of the illness.)

Yes, there is a process that has been causing the more highly educated and the more highly paid to have fewer children than average, but natural selection will probably cancel out the effect of that process over the next few centuries: I can't think of any human traits subject to more selection pressure than the traits that make it more likely that an individual will choose to have children even when effective contraception is cheap and available. Also, declining birth rates are causing big problems for the economies and military readiness of many countries, and governments might in the future respond to those problems by banning contraception.

Answer by RHollerith

Mysterious chronic illnesses tend to be hard to fix. If controlling human physiology were as easy as controlling software-based systems, some people would be able to stay alive indefinitely.

COI == conflict of interest.

RHollerith

Instead of the compute, can we have some extra time instead, i.e., a pause in capabilities research?

Picture a dynamic logarithmic scale of discomfort stacking with a ‘hard cap’ where every new instance contributes less and less to the total to the point of flatlining on a graph.

Reality is structured such that there tend to be an endless number of (typically very complicated) ways of increasing a probability by a tiny amount. The problem with putting a hard cap on the desirability of some need or want is that the agent will completely disregard the capped need or want in order to affect the probability of satisfying some need or want that is not capped (e.g., the need to avoid people being tortured), even if that effect is extremely small.
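To make the point concrete, here is a minimal sketch (my own illustration with made-up numbers; the `expected_utility` function and its parameters are hypothetical, not anyone's proposal) of how, once a hard-capped term saturates, an expected-utility maximizer ignores arbitrarily large further gains on it in favor of arbitrarily small probability gains on an uncapped term:

```python
# Hypothetical illustration: an expected-utility maximizer whose utility is
# the sum of a hard-capped term and an uncapped, probability-weighted term.

CAP = 10.0  # hard cap on the "stacked discomfort" term

def capped_term(discomfort_prevented):
    # Contributions flatline once the cap is reached.
    return min(discomfort_prevented, CAP)

def expected_utility(discomfort_prevented, p_uncapped_want, uncapped_value=1000.0):
    # Total = capped term + probability of the uncapped want times its value.
    return capped_term(discomfort_prevented) + p_uncapped_want * uncapped_value

# Start from a state where the capped term is already saturated.
# Action A: prevents a million further units of discomfort.
u_a = expected_utility(discomfort_prevented=1_000_000.0, p_uncapped_want=0.5)

# Action B: prevents none of that extra discomfort, but raises the probability
# of the uncapped want (say, nobody being tortured) by one part in a billion.
u_b = expected_utility(discomfort_prevented=10.0, p_uncapped_want=0.5 + 1e-9)

print(u_a, u_b)                    # 510.0 vs 510.000001...
print("B" if u_b > u_a else "A")   # prints "B": the tiny uncapped gain wins
```

In this toy setup the agent is completely indifferent to the extra million units of prevented discomfort, because everything above the cap counts for nothing, so the one-in-a-billion probability bump on the uncapped want decides the choice.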

The problem with routinely skipping dinner is getting enough protein. No matter how much protein you eat in one sitting, your body can use at most 40 or 45 grams. (The rest is converted to fuel -- glucose, fructose or fatty acids, I don't know which.) On a low-protein diet, it is difficult to maintain anything near an optimal amount of muscle mass (even if you train regularly with weights), and the older you get, the harder it gets. One thing muscle mass is good for is smoothing out spikes in blood glucose: the muscles remove glucose from the blood and store it. Muscle also protects you from injury. Also, men report that people (both men and women) seem to like them better when they have more muscle (within reason).

But yeah, if you don't have to worry about maintaining muscle mass, routinely skipping meals ("time-restricted eating") is a very easy way to maintain a healthy BMI.

Just because it was not among the organizing principles of any of the literate societies before Jesus does not mean it is not part of the human mental architecture.

Answer by RHollerith

There is very little hope here IMO. The basic problem is that people have a false confidence in measures to render a powerful AI safe (or in explanations as to why the AI will turn out safe even if no one intervenes to make it safe). Although the warning shot might convince some people to switch from one source of false hope to a different one, it will not materially increase the number of people strongly committed to stopping AI research, all of whom have somehow come to doubt all of the many dozens of schemes published so far for rendering a powerful AI safe (and the many explanations for why the AI will turn out safe even if we don't have a good plan for ensuring its safety).

I wouldn't be surprised to learn that Sean Carroll already did that!
