From what I've read of the source writings of the contemplative traditions and of modern neuroscience studies and theories on meditation, along with my own experiences and reflections on the subject, I've come to view the practice of meditation as serving three different but interconnected purposes: 1.) ego loss, 2.) cultivation of compassion, and 3.) experience of non-dual reality.

Ego loss means inhibiting or eliminating the internal self-critic by changing the way you perceive the target of that critic, namely the concept of a stable 'self' that you identify with. Suffering is caused by the struggle of attachment, which is a manifestation of your internal self-critic admonishing your identified 'self' over future desires and past failures. By changing the concept you identify with as your 'self', you remove the target of the self-critical process, and it fades away, leaving you free to live in the moment unhindered by such suffering. Meditation lessens the activation of what is called the default mode network, a network of neurons extending through several anatomical brain regions whose over-activation is implicated in many mood disorders, including anxiety, depression, and OCD.

Cultivating compassion, generally seen as a way to put yourself in another's shoes and wish others to be free from suffering and attain happiness, is also a way to achieve those things for yourself. Compassion, once cultivated, recursively extends back onto yourself and allows you to be compassionate about your own life. Being loving and kind to yourself, not in a narcissistic way but in an egoless, purely compassionate way, as a sentient being among others, helps you remain in a positive mood and helps others do the same. Mirror neurons and the ability to form a theory of mind are hypothesized to be active in empathic cognition, but they may also prove to be the crucial substructure that allows us to have a theory of our own minds, the seat of the 'self' if you like, and so compassion is both an outward- and an inward-directed process.

Lastly, the experience of non-dual awareness is somewhat like the combination of the previous two effects, but taken further: it inculcates a deep connection between you, others, and everything else in the universe as one unseparated whole. I've only experienced this sensation twice, but it is overwhelming and life-changing.

In conclusion, meditation helps to develop and sustain these brain processes, but it is not easy, takes a lot of time, and may not even be effective for many people. I'm hoping that affective neuroscience and neurotechnology will progress to the level where meditation is unnecessary for achieving these states (though it can and should still be practiced for aesthetic reasons by those so inclined).

I wanted to add an insight from Neil Gershenfeld, which I think captures how we should frame these problems:

"We've already had a digital revolution; we don't need to keep having it. The next big thing in computers will be literally outside the box, as we bring the programmability of the digital world to the rest of the world."

He was talking about personal fabrication in this context, but the 'digitization' of the physical world is applicable to the sustainability goals I mentioned. Using operations research, loosely-coupled distributed architectures, nature-inspired routing algorithms, and other tricks of the IT trade applied to natural resources, we can finally transition to a sustainable world.
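To make "nature-inspired routing" concrete, here is a minimal sketch of stigmergic (ant-colony-style) routing over a toy resource network. The graph, costs, and update constants are all invented for illustration, not drawn from any real system: agents mark cheap routes with virtual pheromone, and the marking itself becomes the shared coordination signal.

```python
import random

# Toy resource-distribution network: directed edges with transport costs.
# The topology and costs are invented purely for illustration.
graph = {
    "source": {"A": 4.0, "B": 2.0},
    "A": {"sink": 1.0},
    "B": {"A": 1.0, "sink": 5.0},
}

# Stigmergy: agents deposit 'pheromone' on the edges of cheap routes,
# and later agents preferentially follow strongly marked edges.
pheromone = {(u, v): 1.0 for u, nbrs in graph.items() for v in nbrs}

def walk(start="source", goal="sink"):
    """One agent's probabilistic walk from source to sink."""
    path, cost, node = [], 0.0, start
    while node != goal:
        nbrs = list(graph[node].items())
        weights = [pheromone[(node, v)] / c for v, c in nbrs]
        nxt, c = random.choices(nbrs, weights=weights)[0]
        path.append((node, nxt))
        cost += c
        node = nxt
    return path, cost

for _ in range(500):
    path, cost = walk()
    for edge in path:          # reinforce the edges of this route...
        pheromone[edge] += 1.0 / cost
    for edge in pheromone:     # ...while evaporation keeps the system adaptive
        pheromone[edge] *= 0.99

print("most reinforced edge:", max(pheromone, key=pheromone.get))
```

The appeal for resource infrastructure is that the coordination state lives in the environment (the pheromone table) rather than in any central planner, which is what makes these loosely-coupled distributed architectures robust.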

Surprised no one has mentioned anything involving sustainable/clean tech (energy, food, water, materials). This site does stress existential threats, and given that many (most?) societal collapses in the past were precipitated, at least in part, by resource collapse, I'd want to concentrate much of the startup activity on trying to disrupt our short-term, wasteful systems. Large pushes to innovate around the big four (energy, food, water, materials) would do more than anything else I can think of to improve the condition of our world and minimize the major risks confronting us within the next 100 years (or sooner).

It's not as hopeless as it appears at first glance. Population will reach about 9-10 billion people within 50 years (not much more, due to lower birth rates as developing countries have fewer children and developed countries go into negative population growth), so that is the carrying capacity to aim for. Decoupling the big four from the unpredictability of scarcity, monocrops, climate change, and depletion/destruction, not only through innovations in the specific domains but also through advanced information technologies and algorithms (operations research, stigmergic routing, ...), would mark the first time our planet was placed on a secure and sustainable foundation for our basic resource needs. If there is any other large, audacious goal that would change the world more positively than this (other than a positive singularity), I can't think of it.
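The 9-10 billion plateau is just the behavior of logistic growth as population approaches a carrying capacity. A minimal sketch, where K, r, and the starting population are rough illustrative assumptions rather than demographic projections:

```python
# Minimal logistic-growth sketch: population plateaus at the carrying capacity K.
# K, r, and the starting population are rough assumptions, not projections.
K = 10.0   # assumed carrying capacity, billions
P = 7.0    # rough current population, billions
r = 0.037  # chosen so the effective rate near 7 billion is ~1.1%/yr

for year in range(0, 101, 10):
    print(f"year +{year:3d}: {P:5.2f} billion")
    for _ in range(10):
        P += r * P * (1 - P / K)   # Euler step of dP/dt = r*P*(1 - P/K)
```

Run it and the curve flattens out between 9 and 10 billion within roughly 50 years, which is the demand the big four would need to supply sustainably.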

OK, so we can say with confidence that humans and other organisms with developed neural systems experience the world subjectively, maybe not in exactly similar ways, but conscious experience seems likely for these systems unless you are a radical skeptic or solipsist. Based on our current physical and mathematical laws, we can reductively analyse these systems and see how each subsystem functions, and eventually, with sufficient technology, we'll be able to map the neural correlates that are active in certain environments and which produce certain qualia. Neuroscientists are on that path already.

But are only physical nervous systems capable of producing a subjective experience? If we emulate a brain with enough precision, with sufficient input and output to an environment, computationalists assume that it will behave and experience the same as if it were a physical wetware brain. Given this assumption, we conclude that the simulated brain, which is just machine code operating on transistors, has qualia. So now qualia are attributed to a software system. How much can we diverge from this perfect software emulation and still have a system that experiences qualia? From the other end, building a cognitive agent piecemeal in software without reference to biology, what types of dynamics will cause qualia to arise, if at all? The simulated brain is just data, as is Microsoft Windows, but Windows isn't conscious, or so we think. Looking at the electrons moving through the transistors tells us nothing about which running software has qualia and which does not.

On the other hand, it might be the case that deeper physics beyond the classical must be involved for the system to have qualia. In that case, classical computers will be unable to run software that experiences qualia, and machines that exploit quantum properties may be needed. This is still speculative, but then the whole question of qualia is still speculative.

So now, when designing an AI that will learn and grow and behave in accordance with human values, how important are qualia for it to function along those lines? Can an unconscious optimizing algorithm be robust enough to act morally and shape a positive future for humanity? Will an unconscious optimizing algorithm, without the subjectivity that we take for granted, be able to scale up in intelligence to the level we see in biological organisms, let alone humans and beyond, or is subjective experience necessary for the level of intelligence we have? If it is possible, will an optimizing algorithm actually become conscious and experience qualia after a certain threshold, and how would that affect its continued growth?

On a side note, my hypothetical friendly AGI project, one that would directly guarantee success without speculating about the limits of computation, qualia, or how to safely encode meta-ethics in a recursively optimizing algorithm, would be to just grow a brain in a vat, as it were: perhaps neural tissue cultures on biochips with massive interconnects, coupled to either a software or hardware embodiment, with an architecture designed so that its metacognitive processes are hardwired for compassion and empathy. A bodhisattva in a box. Yes, I'm aware of all the fear-mongering regarding anthropomorphized AIs, but I'm willing to argue that the possibility space of potential minds, at least the ones we have access to create from our place in history, is greatly constricted, and that this route may be the best, and possibly the only, way forward.

To summarize (mostly for my sake so I know I haven't misunderstood the OP):

  • 1.) Subjective conscious experience or qualia play a non-negligible role in how we behave and how we form our beliefs, especially of the mushy (technical term) variety that ethical reasoning is so bound up in.
  • 2.) The current popular computational flavor of philosophy of mind has, in your eyes, inadequately addressed qualia, because the universality of the extended Church-Turing thesis, though satisfactorily covering the mechanistic descriptions of matter in a way that provides for emulation of physical dynamics, does not tell us anything about which things would have subjective conscious experiences.
  • 3.) Features of quantum mechanics such as entanglement and topological structures in a relativistic quantum field provide a better ontological foundation for your speculative theories of consciousness, which take as inspiration phenomenology and a quantum monadology.

EDIT: I guess the shortest synopsis of this whole argument is: we need to build qualia machines, not just intelligent machines, and we don't have any theories yet to help us do that (other than the normal, but delightful, 9-month process we currently use). I can very much agree with #1. Now, with #2, it is true that the explanatory gap of qualia does not yield to computational descriptions of physical processes, but it is also true that the universe may just be constructed such that this computational description is the best we can get, and we will have to accept that qualia will be experienced by those computational systems that are organized in particular ways, the brain being one such arrangement. And for #3, without more information about your theory, I don't see how appealing to ontologically deeper physical processes gets you any further in explaining qualia; you need to give us more.

We have evolved moral intuitions such as empathy and compassion that underlie what we consider to be right or wrong. These intuitions only work because we consciously internalize another agent's subjective experience and identify with it. In other words, without the various qualia that we experience we would have no foundation to act ethically. An unconscious AI that does not experience these qualia could, in theory, act the way we think it should act by mimicking behaviors from a repertoire of rules (and ways to create further rules) that we give it, but that is a very brittle and complicated route, and it is the route the SIAI has been taking because they have discounted qualia, which is what this post is really all about.

"How an algorithm feels from inside" discusses a particular quale, that of the intuitive feeling of holding a correct answer from inside the cognizing agent. It does not touch upon what types of physically realizable systems can have qualia.

"If everything real is made of physics, you still must either explain how certain patterns of neuronal excitations are actually green, or you must assert that nothing is actually green at any level of reality."

This is a 'why' question, not a 'how' question, and though some 'why' questions may not be amenable to deeper explanations, 'how' questions are always solvable by science. Explaining how neuronal patterns generate systems with subjective experiences of green is a straightforward, though complex, scientific problem. One day we may understand this so well that we could engineer qualia on demand, or create new, never-before-seen types of qualia according to some transformation rules. However, explaining 'why' such arrangements of matter should possess such interiority or subjectivity is, I think, at least based on everything we currently know, unanswerable.

In my opinion, the most relevant article was from Drew McDermott, and I'm surprised that such an emphasis on analyzing the computational complexity of approaches to 'friendliness' and self-improving AI has not been more common. For that matter, I think computational complexity has more to tell us about cognition, intelligence, and friendliness in general, not just in the special case of self-improving optimization/learning algorithms, and could completely modify the foundational assumptions underlying ideas about intelligence/cognition and the singularity.

I'm thinking of specific concepts by Yudkowsky and others in the singularity/FAI crowd that seem uncontroversial at first glance but become unconvincing when analyzed in the light of computational complexity. One example is the concept of the possibility space of minds, an assumption propping up many of the arguments for the negative consequences of careless AI engineering. When seen from the perspective of computability, that possibility space does represent the landscape of theoretically possible intelligent agents, and those sensitive and wise enough to care about where in that landscape most outcomes of successful AI engineering projects will land are alarmed at the needle in the haystack that is our target for a positive outcome. But if you put on your computational complexity hat and start to analyze not just the particular algorithms representing AI systems themselves, but the engineering processes that work toward outputting those AI agents/systems, a very different landscape takes shape, one that drastically constrains the space of possible minds that a.) are of a comparable cognitive class with humans, and b.) have a feasible engineering approach on a timescale T < the heat death of our universe. (I'm including the evolution of natural history on Earth within the set of engineering processes that output intelligence.)
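A back-of-the-envelope calculation makes the point. All the numbers below are invented and deliberately generous; only the orders of magnitude matter:

```python
import math

# Deliberately generous, invented numbers; only orders of magnitude matter.
program_bits = 10**9        # suppose ~1 gigabit suffices to specify a mind
log10_minds = program_bits * math.log10(2)   # log10 of 2^(10^9) candidates

log10_ops_per_sec = 50      # absurdly optimistic aggregate compute
log10_seconds = 107         # ~10^100 years until heat death, in seconds
log10_total_ops = log10_ops_per_sec + log10_seconds

# Even if evaluating a candidate mind cost a single operation, the fraction
# of the abstract mind-space any blind search could visit is 10^(-huge):
print(f"candidate minds:       10^{log10_minds:.3g}")
print(f"ops before heat death: 10^{log10_total_ops}")
print(f"searchable fraction:   10^{log10_total_ops - log10_minds:.3g}")
```

Blind search covers a fraction of the space so close to zero that the only minds that ever get built are the ones reachable by highly structured processes like evolution and human engineering, which is exactly the constriction I'm arguing for.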

This is but one example of how the neglect of computational complexity, and, to be frank, the neglect of time as a very important factor overall, has influenced the thinking of the SIAI/LessWrong et al. crowd. This neglect leads to statements such as Yudkowsky's claim that an AI could be programmed on a desktop computer circa the early 2000s, which I am extremely incredulous of. It also leads to timeless decision theories, which I don't feel will be of much importance. Scott Aaronson has made a career out of stressing computational complexity for understanding the deep nature of quantum mechanics, and this should apply to all natural phenomena, cognition and AI among them.
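To put numbers on that incredulity: commonly cited (and much-contested) estimates place the brain's raw processing at roughly 10^14-10^16 operations per second, against roughly 10^9 FLOPS for an early-2000s desktop. The figures in this sketch are those rough estimates, not measurements:

```python
import math

# Rough, commonly cited (and much-contested) estimates; orders of magnitude only.
desktop_flops_2000s = 1e9    # early-2000s desktop, ~1 GFLOPS
brain_ops_estimates = {
    "low (Moravec-style)": 1e14,
    "high (synapse-level)": 1e16,
}

for label, brain_ops in brain_ops_estimates.items():
    gap = math.log10(brain_ops / desktop_flops_2000s)
    print(f"{label}: desktop falls short by ~10^{gap:.0f}")
```

Even granting large algorithmic improvements, that five-to-seven-order-of-magnitude hardware gap is exactly the kind of time-and-resource factor I mean.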
