Stephen Wolfram on AI Alignment
Joe Walker has a general conversation with Wolfram about his work, but there are some remarks about AI alignment at the very end:

> WALKER: Okay, interesting. So moving finally to AI, many people worry about unaligned artificial general intelligence, and I think it's a risk we should take seriously. But computational irreducibility must imply that a mathematical definition of alignment is impossible, right?
>
> WOLFRAM: Yes. There isn't a mathematical definition of what we want AIs to be like. The minimal thing we might say about AIs, about their alignment, is: let's have them be like people are. And then people immediately say, "No, we don't want them to be like people. People have all kinds of problems. We want them to be like people aspire to be."
>
> And at that point, you've fallen off the cliff. Because, what do people aspire to be? Well, different people aspire to be different things, and different cultures aspire in different ways. And I think the concept that there will be a perfect mathematical aspiration is just completely wrongheaded. It's just the wrong type of answer.
>
> The question of how we should be is a question that is a reflection back on us. There is no "this is the way we should be" imposed by mathematics.
>
> Humans have ethical beliefs that are a reflection of humanity. One of the things I realised recently is that one of the things that's confusing about ethics is that, if you're used to doing science, you say, "Well, I'm going to separate a piece of the system. I'm going to study this particular subsystem. I'm going to figure out exactly what happens in the subsystem. Everything else is irrelevant."
>
> But in ethics, you can never do that. So you imagine you're doing one of these trolley problem things. You've got to decide whether you're going to kill the three giraffes or the eighteen llamas. And which one is it going to be?
>
> Well, then you realise to really answer that question to the best ability of humanity…
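For a concrete sense of what computational irreducibility means here, Wolfram's standard example is the Rule 30 cellular automaton: so far as anyone knows, there is no shortcut formula for the pattern at step n; you have to run all n steps. A minimal Python sketch (my illustration, not from the interview):

```python
# Rule 30, the elementary cellular automaton Wolfram uses as the canonical
# example of computational irreducibility: as far as anyone knows, the only
# way to learn the pattern at step n is to simulate all n steps.

def rule30_step(cells):
    """Advance one row of the automaton; cells is a list of 0s and 1s."""
    padded = [0, 0] + cells + [0, 0]  # the pattern can grow one cell per side
    # Rule 30 in Boolean form: new cell = left XOR (center OR right)
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

row = [1]  # start from a single "on" cell
for _ in range(16):
    print("".join("#" if c else "." for c in row).center(40))
    row = rule30_step(row)
```

Running it prints the familiar chaotic triangle; the point is that nothing in the program lets you jump ahead to row 1,000,000 without computing the 999,999 rows before it.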
Neuroscientist Jaak Panksepp is also known for discovering "laughter" in rats: *"Laughing" rats and the evolutionary antecedents of human joy?*
> Abstract: Paul MacLean's concept of epistemics, the neuroscientific study of subjective experience, requires animal brain research that can be related to predictions concerning the internal experiences of humans. Especially robust relationships come from studies of the emotional/affective processes that arise from subcortical brain systems shared by all mammals. Recent affective neuroscience research has yielded the discovery that play- and tickle-induced ultrasonic vocalization patterns (approximately 50-kHz chirps) in rats may have more than a passing resemblance to primitive human laughter. In this paper, we summarize a dozen reasons for the working hypothesis that such rat vocalizations reflect a type of positive affect that may have evolutionary relations to the joyfulness of human childhood laughter commonly accompanying social play. The neurobiological nature of human laughter is discussed, as is the relevance of such ludic processes for understanding clinical disorders such as attention deficit hyperactivity disorder (ADHD), addictive urges, and mood imbalances.