RHollerith

Richard Hollerith. 15 miles north of San Francisco. hruvulum@gmail.com

My probability that AI research will end all human life is .92. It went up drastically when Eliezer started going public with his pessimistic assessment in April 2022. Until then, my confidence in MIRI (and my knowledge that MIRI has enough funding to employ many researchers) was keeping my probability down to about .4. (I am glad I found out about Eliezer's assessment.)

Currently I am willing to meet with almost anyone on the subject of AI extinction risk.

Last updated 26 Sep 2023.

Comments

The problem with routinely skipping dinner is getting enough protein. No matter how much protein you eat in one sitting, your body can use at most 40 or 45 grams; the rest is converted to fuel (glucose or fatty acids). On a low-protein diet, it is difficult to maintain anything near an optimal amount of muscle mass (even if you train regularly with weights), and the older you get, the harder it gets.

One thing muscle mass is good for is smoothing out spikes in blood glucose: the muscles remove glucose from the blood and store it. Muscle also protects you from injury. Also, men report that people (men and women) seem to like them better when they have more muscle (within reason).
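A rough arithmetic sketch of the constraint, using the 45-gram per-sitting ceiling above plus two assumed figures that are not from the original comment: a commonly cited target of about 1.6 g of protein per kg of body weight per day for supporting muscle, and a 75 kg person.

$$ 2 \text{ meals} \times 45\ \text{g} = 90\ \text{g/day} \quad < \quad 75\ \text{kg} \times 1.6\ \text{g/kg} = 120\ \text{g/day} $$

On those assumptions, two meals a day fall about 30 g short of the target.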

But yeah, if you don't have to worry about maintaining muscle mass, routinely skipping meals ("time-restricted eating") is a very easy way to maintain a healthy BMI.

Just because it was not among the organizing principles of any of the literate societies before Jesus does not mean it is not part of the human mental architecture.

Answer by RHollerith, Apr 14, 2024

There is very little hope here IMO. The basic problem is that people have a false confidence in measures to render a powerful AI safe (or in explanations for why the AI will turn out safe even if no one intervenes to make it safe). Although the warning shot might convince some people to switch from one source of false hope to another, it will not materially increase the number of people strongly committed to stopping AI research, all of whom have somehow come to doubt the many dozens of schemes published so far for rendering a powerful AI safe (and the many explanations for why the AI will turn out safe even if we don't have a good plan for ensuring its safety).

I wouldn't be surprised to learn that Sean Carroll already did that!

Is it impossible that someday someone will derive the Born rule from Schrödinger's equation (plus perhaps some of the "background assumptions" relied on by the MWI)?
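For reference, the Born rule is the postulate linking the wavefunction to outcome probabilities: for a state expanded in an orthonormal eigenbasis of the measured observable,

$$ |\psi\rangle = \sum_i c_i\,|\phi_i\rangle, \qquad P(i) = |c_i|^2 = |\langle\phi_i|\psi\rangle|^2. $$

Schrödinger's equation, $i\hbar\,\partial_t|\psi\rangle = H|\psi\rangle$, specifies only the deterministic evolution of $|\psi\rangle$, which is why obtaining the probabilities $P(i)$ from it (plus background assumptions) is the hard part of such a derivation.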

Being uncertain of the implications of a hypothesis has no bearing on the hypothesis's Kolmogorov complexity.
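For reference, the Kolmogorov complexity of a hypothesis encoded as a string $x$ is the length of the shortest program that outputs $x$ on a fixed universal machine $U$:

$$ K(x) = \min\{\, |p| : U(p) = x \,\}. $$

The definition refers only to how concisely $x$ can be specified, not to how hard it is to work out the consequences of $x$.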

Fire temperature can be computed from the fire's color.
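A minimal sketch of the standard relationship, assuming the fire radiates approximately as a blackbody (real flames do so only roughly): Wien's displacement law ties the temperature to the wavelength at which emission peaks,

$$ \lambda_{\text{peak}} = \frac{b}{T}, \qquad b \approx 2.898 \times 10^{-3}\ \text{m}\cdot\text{K}. $$

For example, sunlight peaks near 500 nm, giving $T \approx \frac{2.898 \times 10^{-3}}{500 \times 10^{-9}} \approx 5800\ \text{K}$.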

I'm tired of the worthless AI-generated art that writers here put in their posts and comments. Some might not be able to relate, but the way my brain works, I have to exert focus for a few seconds to suppress the effects of having seen the image before I can continue to engage with the writer's words. It is quite mentally effortful.

As a deep-learning novice, I found the post charming and informative.

The statement does not mention existential risk, but rather "the risk of extinction from AI".
