Downvoted as I find this comment uncharitable and rude.
(The link to the bluetooth keyboard on your blog is broken, or the keyboard is missing)
Maybe the V1 dopamine receptors are simply useless evolutionary leftovers (perhaps retaining them is easier from a developmental perspective)
A taxonomy of objections to AI Risk from the paper:

What sort of epistemic infrastructure do you think is importantly missing for the alignment research community?
What are the best examples of progress in AI Safety research that you think have actually reduced x-risk?
(Instead of operationalizing this explicitly, I'll note that the motivation is to understand whether doing more technical AI Safety research is directly beneficial, as opposed to mostly irrelevant or beneficial only through second-order effects.)
The (meta-)field of Digital Humanities is fairly new. Estimating its successes and challenges would help me form a stronger opinion on this matter.
One project that implements something like this is 'Circles'. I remember it was on hold several years ago, but it seems to be running now - link
Thoughts that come to mind: