This post is shameless self-promotion, but I'm told that's probably okay in the Discussion section. For context, as some of you are aware, I'm aiming to model C. elegans based on systematic high-throughput experiments - that is, to upload a worm. I'm still working on course requirements and lab training at Harvard's Biophysics Ph.D. program, but this remains the plan for my thesis.
Last semester I gave this lecture to Marvin Minsky's AI class, because Marvin professes disdain for all things neuroscience, and I wanted to give his students (and him) a fair perspective on how basic neuroscience might be changing for the better, and why it seems like a particularly exciting field to be in right now. The lecture is about 22 minutes long, followed by over an hour of questions and answers, which cover much of the memespace surrounding this concept. Afterward, several students told me that it transformed their understanding of neuroscience.
I only just now got around to encoding and uploading the recording. I believe many of the topics covered could be of interest to the LW community (especially those with a background in AI and an interest in brains), and perhaps worthy of discussion; I hope you agree.
Would you change your mind if I gave a precise definition of, say, "suffering", and showed you two paths to the future that end up with similar levels of technology but different amounts of suffering? I'll assume the answer is yes, because otherwise why would you have given me that challenge?
What if I said that I don't know how to define it now, but I think that if you made me a bit (or a lot) smarter and gave me a few decades of subjective time to work on the problem, I could probably give you such a definition and tell you how to achieve the "less suffering, same tech" outcome? Would you be willing to give me that chance (assuming it was in your power to do so)? Or are you pretty sure that "suffering" is not just hard to define but actually impossible to define, and/or that it's impossible to reduce suffering to any significant extent below the default outcome while keeping technology at the same level? If you are pretty sure about this, are you equally sure about every other value I could cite instead of suffering?
Masochist: Please hurt me!
Sadist: No.
Not sure, but it might be impossible.