Crossposted at the Intelligent Agents Forum.
The colloquial phrase "an AI hacking a human" can mean three different things:
- The AI convinces/tricks/forces the human to do a specific action.
- The AI changes the values of the human to prefer certain outcomes.
- The AI completely overwhelms human independence, transforming them into a weak subagent of the AI.
Different levels of hacking make different systems vulnerable, and different levels of interaction make different types of hacking more or less likely.
Baboons... literally have been the textbook example of a highly aggressive, male-dominated, hierarchical society. Because these animals hunt, because they live in these aggressive troops on the savannah... they have a constant baseline level of aggression which inevitably spills over into their social lives.
Scientists have never observed a baboon troop that wasn't highly aggressive, and they have compelling reasons to think this is simply baboon nature, written into their genes. Inescapable.
Or at least, that was true until the 1980s, when Kenya experienced a tourism boom.
Sapolsky was a grad student, studying his first baboon troop. A new tourist lodge was built at the edge of the forest where his baboons lived. The owners of the lodge dug a hole behind the lodge and dumped their trash there every morning, after which the males of several baboon troops — including Sapolsky's — would fight over this pungent bounty.
Before too long, someone noticed the baboons didn't look too good. It turned out they had eaten some infected meat and developed tuberculosis, which kills baboons in weeks. Their hands rotted away, so they hobbled around on their elbows. Half the males in Sapolsky's troop died.
This had a surprising effect. There was now almost no violence in the troop. Males often reciprocated when females groomed them, and males even groomed other males. To a baboonologist, this was like watching Mike Tyson suddenly stop swinging in a heavyweight fight to start nuzzling Evander Holyfield. It never happened.
Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality”
In the days since we published our previous post, a number of people have come up to me and expressed concerns about our new mission. Several of these had the form “I, too, think that AI safety is incredibly important — and that is why I think CFAR should remain cause-neutral, so it can bring in more varied participants who might be made wary by an explicit focus on AI.”
I would here like to reply to these people and others, and to clarify what is and isn’t entailed by our new focus on AI safety.
In 2007, psychology researchers Michal Kosinski and David Stillwell released a personality-testing Facebook app called myPersonality. The app ended up being used by 4 million Facebook users, most of whom consented to having their answers to the personality questions, along with some information from their Facebook profiles, used for research purposes.
The very large sample size and matching data from Facebook profiles make it possible to investigate many questions about personality differences that were previously inaccessible. Kosinski and Stillwell have used the dataset in a number of interesting publications, which I highly recommend.
In this post, I focus on what the dataset tells us about how big five personality traits vary by geographic region in the United States.
Drugs that affect the nervous system get administered systemically. It's easy to imagine that we could do much more if we could stimulate one nerve at a time, and in patterns designed to have particular effects on the body.
By decoding neural activity, researchers can detect the nerve impulses that indicate a paralyzed person intends to move a limb, and build prosthetics that respond to the mind the way a real limb would. A company called BrainGate is already making these. You can see a paralyzed person using a robotic arm with her mind here.
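To give a feel for the idea, here is a toy sketch of a motor decoder: it assumes a crude linear model in which each simulated neuron's firing rate depends on intended 2-D movement velocity, then fits a least-squares map from rates back to velocity. Everything here (neuron counts, noise levels, the linear tuning model) is illustrative; real systems like BrainGate's use far more sophisticated decoders, such as Kalman filters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 20 simulated neurons whose firing rates depend linearly on
# intended 2-D velocity, plus noise (a crude stand-in for cosine tuning).
n_neurons, n_samples = 20, 500
true_weights = rng.normal(size=(n_neurons, 2))   # each neuron's preferred direction
velocity = rng.normal(size=(n_samples, 2))       # intended velocities (training data)
rates = velocity @ true_weights.T + 0.1 * rng.normal(size=(n_samples, n_neurons))

# "Train" the decoder: least-squares fit from firing rates back to velocity.
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode a new intended movement from its noisy firing rates.
intended = np.array([1.0, -0.5])
observed_rates = intended @ true_weights.T + 0.1 * rng.normal(size=n_neurons)
decoded = observed_rates @ decoder
print(decoded)  # close to the intended [1.0, -0.5]
```

The same pattern — record population activity, fit a mapping to intended movement, apply it in real time — is the core loop behind mind-controlled prosthetics.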
A fair number of diseases that don't seem "neurological", like rheumatoid arthritis and ulcerative colitis, can actually be treated by stimulating the vagus nerve. The nervous system is tightly associated with the immune and endocrine systems, which is probably why autoimmune diseases are so often accompanied by psychiatric comorbidities; it also means that the nervous system might be an angle for treating autoimmune diseases. There is a "cholinergic anti-inflammatory pathway", involving the vagus nerve, which inactivates macrophages when they're exposed to the neurotransmitter acetylcholine, and thus lessens the immune response. Turning this pathway on electrically is thus a prospective treatment for autoimmune or inflammatory diseases. Vagus nerve stimulation has been tested and found successful in rheumatoid arthritis patients, in rat models of inflammatory bowel disease, and in dog experiments on chronic heart failure; vagus nerve activity mediates pancreatitis in mice; and vagus nerve stimulation attenuates the inflammatory response (cytokine release and shock) to the bacterial poison endotoxin.
To get there, we'd need several things:
- Much more detailed maps of where exactly nerves innervate various organs and which neurotransmitters they use.
- Recordings of patterns of neural activity to detect which nerve signals modulate which diseases, and experiments to determine causal relationships between neural signals and organ functions.
- Small electronic interfaces (cuffs and chips) for use on peripheral nerves.
- Lots of improvements in small-scale and non-invasive sensor technology (optogenetics, neural dust, ultrasound and electromagnetic imaging).
- Better tools for real-time, quantitative measurements of hormone and neurotransmitter release from nerves and organs.
A lot of this seems to clearly need hardware and software engineers, and signal-processing/image-processing/machine-learning people, in addition to traditional biologists and doctors. In the general case, neural modulation of organ function is Big Science in the way brain mapping or genomics is. You need to know where the nerves are, and what they're doing, in real time. This is likely going to need specialized software which outpaces what labs are currently capable of.
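As a concrete example of the kind of signal-processing work involved, here is a minimal sketch of event detection in a noisy extracellular recording: synthetic spikes are injected into baseline noise, the noise level is estimated robustly via the median absolute deviation (a standard trick in spike sorting), and threshold crossings are flagged. All numbers (sample rate, amplitudes, the threshold multiplier) are illustrative, not taken from any particular lab's pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic nerve recording: Gaussian baseline noise with negative-going
# spikes injected at known times.
fs = 30_000                            # sample rate in Hz, typical for extracellular work
signal = 0.05 * rng.normal(size=fs)    # 1 second of baseline noise
spike_times = [3000, 9000, 15000, 21000, 27000]
for s in spike_times:
    signal[s:s + 30] += -0.5 * np.hanning(30)   # crude spike waveform

# Robust noise estimate: median absolute deviation scaled to match a
# Gaussian standard deviation, so spikes don't inflate the estimate.
noise = np.median(np.abs(signal)) / 0.6745
threshold = -4.5 * noise

# Flag falling-edge crossings below the threshold.
crossings = np.flatnonzero((signal[1:] < threshold) & (signal[:-1] >= threshold))
print(crossings)  # detections cluster near the injected spike times
```

Doing this across many channels, continuously, and feeding the detections into closed-loop stimulation is exactly the sort of real-time software problem the field needs engineers for.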
Bioelectronics seems potentially important not just for disease treatment today, but for more speculative goals like brain uploads or intelligence enhancement. It's a locally useful step along the path of understanding what the brain is actually doing, at a finer-grained level than the connectome alone can indicate, which may very well be relevant to AI.
It's tricky for non-academic software people (like myself and many LessWrong readers) to get involved in biomedical technology, but I predict that this is going to be one of the opportunities that needs us most, and if you're interested, it's worth watching this space to see when it gets out of the stage of university labs and DARPA projects and into commercialization.