Richard Hollerith. 15 miles north of San Francisco. hruvulum@gmail.com
My probability that AI research will end all human life is .92. It went up drastically when Eliezer started going public with his pessimistic assessment in April 2022. Till then my confidence in MIRI (and knowing that MIRI has enough funding to employ many researchers) was keeping my probability down to about .4. (I am glad I found out about Eliezer's assessment.)
Currently I am willing to meet with almost anyone on the subject of AI extinction risk.
Last updated 26 Sep 2023.
Almost all the reusable respirators have the valve.
Sure, but without a valve, you are breathing back in exhaled CO2.
Even some of the single-use masks have exhale valves.
Grass is mostly (water and) carbs, just not carbs a person can digest and burn with any efficiency.
Good point. Change my final sentence to, "A warning shot is made by the entity capable of imposing damaging consequences on you -- to alert you and to give you a way to avoid the most damaging of the consequences at its disposal."
Many believe that one hope for our future is that the AI labs will make some mistake that will kill many people, but not all of us, resulting in the survivors finally realizing how dangerous AI is. I wish people would refer to that as a "near miss", not a "warning shot". A warning shot is when the danger (originally a warship) actually cares about you but cares about its mission more, with the result that it complicates its plans and policies to try to keep you alive.
I am surprised by that because I've been avoiding learning about LLMs (including making any use of LLMs) till about a month ago, so it didn't occur to me that implementing this might have been as easy as adding to the system prompt instructions for what kinds of information to put in the contextual memory file.
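For concreteness, here is a minimal sketch of the mechanism as I understand it (every name here — the file path, the `MEMORY:` convention, the function names — is a hypothetical illustration, not any lab's actual implementation): the "contextual memory" is just a text file whose contents get appended to the system prompt, plus an instruction telling the model what kinds of information to emit for saving.

```python
# Hypothetical sketch: system-prompt-based contextual memory.
# The memory file's contents are appended to the system prompt, and the
# prompt instructs the model to flag facts worth remembering.

MEMORY_PATH = "memory.txt"  # hypothetical location of the memory file

MEMORY_INSTRUCTIONS = (
    "When you learn a stable fact about the user (preferences, ongoing "
    "projects, constraints), output a line starting with 'MEMORY:' so it "
    "can be saved to your contextual memory file."
)

def build_system_prompt(base_prompt: str) -> str:
    """Combine the base prompt, the memory instructions, and saved memory."""
    try:
        saved = open(MEMORY_PATH).read()
    except FileNotFoundError:
        saved = ""
    parts = [base_prompt, MEMORY_INSTRUCTIONS]
    if saved.strip():
        parts.append("Contextual memory:\n" + saved)
    return "\n\n".join(parts)

def save_memory_lines(model_output: str) -> None:
    """Append any 'MEMORY:' lines from the model's output to the file."""
    lines = [l[len("MEMORY:"):].strip()
             for l in model_output.splitlines()
             if l.startswith("MEMORY:")]
    if lines:
        with open(MEMORY_PATH, "a") as f:
            f.write("\n".join(lines) + "\n")
```

If it really works roughly like this, then "implementing" memory is mostly prompt engineering plus a little file plumbing, which is what surprised me.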
This contextual memory file is edited by the user, never the AI?
In 2015, I didn't write much about AI on Hacker News because even just explaining why it is dangerous will tend to spark enthusiasm for it in some people (people attracted to power, who notice that since it is dangerous, it must be powerful). These days, I don't let that consideration stop me from writing about AI.
Good points, which in part explain why I think it is very, very unlikely that AI research can be driven underground (in the US or worldwide). I was speaking to the desirability of driving it underground, not its feasibility.
Regardless of the reason it is beneficial, I notice that almost all industrial respirators have valves, and the two respirators I know of that do not were designed by amateurs during the COVID pandemic.
That said, you've already purchased a respirator without a valve, so I would keep using it, but if you lose it or need to buy a new one for some reason, I'd go with one with a valve.