
A new study was published today with results that contradict those found in the UCSF study that you've written about.

Human Hippocampal Neurogenesis Persists Throughout Aging

The Sorrells et al study is directly mentioned twice in this paper. The first claim is that the study failed to address medication and drug use, which impact adult hippocampal neurogenesis.

The density of doublecortin-positive (DCX+) cells were reported to decline from birth into the tenth decade of life (Knoth et al., 2010) in parallel with 14C-determined neuron turnover (Bergmann et al., 2015); however, medication and drug use, which affect AHN (Boldrini et al., 2014), were not addressed (Knoth et al., 2010; Sorrells et al., 2018; Spalding et al., 2013).

The second, more interesting claim is that the Sorrells et al study only looked at incredibly tiny slices of the dentate gyrus region of the hippocampus, and that the conditions under which these samples were preserved likely impacted the ability to detect neurogenesis.

Direct comparison between our data and Knoth’s data is not possible because they analyzed only three 5 µm sections per subject from portions of the hippocampus, treated tissue at 80°C for 1 hr and at low pH to obtain deparaffinization and antigen retrieval, and assessed cell density without using stereology, which is the gold standard, given that cell density does not necessarily reflect total cell number (West and Gundersen, 1990). For the same reasons, we cannot compare our findings with those of a recent descriptive study that failed to detect DCX/PSA-NCAM+ cells in the DG from 15 subjects between 18 and 77 years of age (Sorrells et al., 2018).

According to this article in today's LA Times, the UCSF group responded to the study.

In an email statement, that group, which works out of developmental neuroscientist Arturo Alvarez-Buylla's lab, said that while they found the new study's evidence of declining blood vessel growth in the adult hippocampus interesting, they are not convinced that Boldrini and her colleagues found conclusive evidence of adult neurogenesis.

"Based on the representative images they present, the cells they call new neurons in the adult hippocampus are very different in shape and appearance from what would be considered a young neuron in other species, or what we have observed in humans in young children," they wrote.

They added that in their study, they looked not just at protein markers associated with different types of cells, as Boldrini and her team did, but also performed careful analysis of cell shape and structure using light and electron microscopes.

"That revealed that similarly labeled cells in our own adult brain samples proved to be neither young neurons nor neural progenitors, but rather non-neuronal glial cells expressing similar molecular markers," they wrote.

The Times article goes on to describe Boldrini's response, which touches upon the parts of her paper that I cited earlier.

Boldrini points out that the two groups were working with very different samples.

She and her team examined more than two dozen flash-frozen human brains, which were donated by families of the deceased at the time of death. The brains were immediately frozen and stored at minus-112 degrees Fahrenheit, which keeps the tissue from degrading.

The other research team received brain samples from hospitals in China, Spain and the U.S., and the brain tissue they examined had not been preserved in the same way. Boldrini said the chemicals that were used to fix the brains could have interfered with their ability to detect new neurons.

She also noted that while both groups were looking for signs of neurogenesis in the hippocampus region of the brain, her group had access to the entire hippocampus while the UCSF team was looking at thin slices of the tissue representing a small fraction of the brain.

Thank you for your input; I found it very informative!

I agree with your point that any aligned AI will be 100% on board with avoiding value drift, and that certainly does take pressure off of us when it comes to researching this. I also agree that it would be best to avoid this scenario entirely and avoid having a self-improving AI touch its value function at all.

In cases where a self-improving AI can alter its values, I don’t entirely agree that this would only be a concern at subhuman levels of intelligence. It seems plausible to me that an AI of human-level intelligence, or maybe slightly higher, could think that marginally adjusting a value for improved performance is safe, only to be wrong about that. From a human perspective, I find it very difficult to reason through how slightly altering one of my values would affect my reflective reasoning about the importance of that value and the acceptable ranges it could take. A self-improving agent would also have to make this prediction about a more intelligent version of itself, with the added complication of calculating the potential impact on future iterations as well. It’s possible that an agent of human-level intelligence would be able to do this easily, but I’m not entirely confident of that.

And the main reason I bring up the scenario of a self-improving AI with access to its own values is that I see this as a clear path to performance improvement that might seem deceptively safe to some organizations conducting general AI research in the future, especially those where external incentives (such as an international general AI arms race) might push researchers to take risks they normally wouldn’t take in order to beat the competition. If a general AI were properly aligned, I could see certain organizations allowing that AI to improve itself by marginally altering its values, out of fear that a rival organization would do the same.

I’m going to reflect on what you said in more depth, though. Since I’m still new to all of this, it’s very possible that there is relevant external information that I’m missing or not considering thoroughly.