Unfriendly Natural Intelligence
Related to: UFAI, Paperclip maximizer, Reason as memetic immune disorder
A discussion with Stefan (cheers, didn't get your email, please message me) during the European Community Weekend Berlin fleshed out an idea I had toyed around with for some time:
If a UFAI can wreak havoc by driving simple goals to extremes, then driving human desires to extremes should cause problems too. And we should already see this happening.
Actually we do.
We know that just following our instincts on eating (sugar, fat) is unhealthy. We know that stimulating our pleasure centers more or less directly (drugs) is dangerous. We know that playing certain games can lead to comparable addiction. And the recognition of this has led to a large number of more or less fine-tuned anti-memes, e.g. dieting, early drug prevention, and helplines. These memes, which steer us away from such behaviors, were selected for because they provided aggregate benefits to the (members of the) social (sub)systems in which they are present.
Many of these memes have become so self-evident that we don't recognize them as such. Some are essential parts of highly complex social systems. What is the general pattern? Did we catch all the critical cases? Are the existing memes well-suited for the task? How are they related? Many are probably deeply woven into our culture and traditions.
Did we miss any anti-memes?
This last question really is at the core of this post. I think we lack some necessary memes for keeping new exploitations of our desires in check. Some of these exploitations are new because our society has only recently a) developed the capacity to exploit these desires and b) gained the scientific knowledge of how to do so.
Psychologist making pseudo-claim that recent works "compromise the Bayesian point of view"
I have recently been corresponding with a friend who studies psychology regarding human cognition and the best underlying models for understanding it. His argument, summarized very briefly, is given by this quote:
Lastly, there has been a huge amount of research over the last two decades that shows human reasoning is 1) entirely constituted by emotion, and that it is 2) mostly unconscious and therefore out of our control. A lot of this research has seriously compromised the Bayesian point of view. I am referring to work done by Antonio Damasio, who demonstrated the essential role emotion plays in decision making (Descartes' Error), Timothy Wilson, who demonstrated the vital role of the unconscious (Strangers to Ourselves), and Jonathan Haidt, who demonstrated how moral reasoning is dictated by intuition and emotion (The Emotional Dog and its Rational Tail). I could go on and on here. I assume that you are familiar with this stuff. I'd just like to know how you would respond to this work from the point of view of your studies (in particular, those two points). I don't mean to get in a tit for tat debate here, just want the other side of the story.
I am having trouble synthesizing a response that captures the Bayesian point of view (and is sufficiently backed up by sources so that it will be useful for my friend rather than just gainsaying the argument), because I am mostly a decision theory / probability person. Are these works of psychology and neuroscience really demonstrating that human emotion governs decision making? What are some good neuroscience papers to read that deal with this, and how do Bayesians respond? It may be that everything he mentions above is a correct assessment (I don't know and don't have enough time to read the books right now), but that it is irrelevant if you want to make good decisions rather than just accept the types of decisions we already make.
Emotional Installation of Software
I have recently been thinking about this question, "what is it exactly that helps install religious software so deeply and dogmatically into the brain?" Often those who are strongly religious fall into a few categories: (1) They were trained to believe in specific aspects of religion as children; (2) They entered into a very destitute part of their lives (i.e. severe depression, midlife crisis, loss of a job, death in the family, cancer, alcoholism, or other existential problems).
What strikes me about these situations is that emotion generally dominates the decision-making process. I remember when, as a child, I attended church camp at the encouragement of my family. I was heavily pressured by the camp counselors to "accept Christ", and I saw a positive correlation between my willingness to accept Christ, memorize Bible verses, and make certain statements about behavior in the context of Christian morals, and the way that the camp counselors, my extended family, and other adults would treat me. As a result, it was not until many years later that my preference for rationalism and science was able to fully crack that emotionally founded religious belief installed in me as a child. I know many people for whom a similar narrative is true regarding experiences with alcohol, etc., though it seems to be rare for someone to completely dismiss deeply and emotionally held beliefs from their youth.
Emotion is something we have evolved to utilize. Generally speaking, we need emotion because we have to make split-second decisions sometimes in life and we don't have the opportunity to integrate our decision process on data. If someone attacks me I will become angry because anger will raise my adrenaline levels, temporarily reduce other biological needs like hunger or waste removal, and enable me to fight for survival. Essentially emotion is just a recorded previous decision that works on stereotypical data, or in probabilistic terms it is like basing a quick decision on solely the first moment of a bunch of previously experienced data. The first moment might not be the best descriptor of the data... but if you're in a computational bind you might not be able to do a whole lot better and you'll be biologically penalized for spending your CPU time trying to compute better descriptors of the data. But it is undeniable that decisions we all make based upon emotion are often some of the most powerful and deepest-seated beliefs that we have.
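The "first moment" framing above can be made concrete with a toy sketch. Everything here is illustrative and my own invention, not from the post: made-up threat-level data, an arbitrary threshold, and two hypothetical decision rules, one acting on the mean alone (the cheap, emotion-style summary) and one consulting the full distribution.

```python
import statistics

# Past encounters: perceived threat level on a 0-10 scale (made-up data).
# The data are bimodal: most encounters were harmless, a few were severe.
past_threats = [1, 1, 2, 1, 2, 1, 9, 10, 1, 2]

SEVERE = 7  # threat level above which preparing to fight is worthwhile


def quick_decision(data):
    """Emotion-style heuristic: act on the first moment (the mean) alone."""
    return statistics.mean(data) > SEVERE / 2  # one cheap summary statistic


def slow_decision(data):
    """Deliberate estimate: how often past encounters were actually severe."""
    p_severe = sum(1 for x in data if x >= SEVERE) / len(data)
    return p_severe > 0.1  # act if severe outcomes are non-negligible


print(quick_decision(past_threats))  # False: the mean (3.0) hides the tail
print(slow_decision(past_threats))   # True: the full distribution reveals it
```

The point of the sketch is exactly the post's caveat: the first moment is cheap to compute but can be a poor descriptor of bimodal experience, while the better descriptor costs more "CPU time" (here, a pass over all the data rather than one cached summary).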
With religion this is especially true. Very religious people, in my view, have this software installed emotionally and then spend years practicing the art of pushing the installed software ever closer to the very act of perception itself, until at some point it is almost the case that sensory data is literally passed through a religious filter before it is even processed and presented for perception. A sunset becomes a symbol of God's love so much so that there is (almost) no physical distinction between the literal viewing of photons depicting the sunset scene and the thinking of the thought "This shows that God loves me." Emotionally installed software presents a very difficult problem. Depending on how close to the act of perception that it has been pushed, it implies there is a remarkably tiny window of opportunity for the presentation of data that could convincingly demonstrate that rational alternatives are better in a number of important senses.
I'm sure many of you have had debates where you've run into circular logic and unavoidable walls that stifle all useful discussion. Can we as a community come up with a good theory on how sensory data can help to uninstall deeply, emotionally installed software in someone's brain? I really feel that this is an area that deserves some philosophical attention. Is it the case that software installed in someone's brain in conjunction with emotion (and by this I literally mean that the cyclic AMP cycles and other biological processes used for memory formation are strengthened, and that synaptic connections related to the library of belief concepts (e.g. religious ones) are reinforced by chemicals released along with the emotive force of the experience in which they are formed) can only be uninstalled by a similarly impactful emotional experience? It appears that slow-moving rationality and logical discussion are almost physically powerless to succeed as convincing mechanisms. And if this is the case, what should rationalists do to promote their ideas (aside from the obvious social pressure to stop installing religious software in the minds of children, etc.)?
Note that in the discussion above I use 'religion' as a specific example, but any irrationally held belief that derives from an emotionally impactful experience would serve the same purpose. And also, here we can assume 'religious' refers to ontological claims unsupported by any evidence and then purported to have day-to-day impacts on life and decision-making. I would be very grateful for any thoughts the community has and hopefully we can generate some useful techniques for understanding how to appropriately uninstall emotional software (in the instances when it's useful to do so)... even the kinds of emotional software that we ourselves (rationalists) often fall victim to in our own imperfect understanding of the world.
Leaving a line of retreat for theists
Eliezer recommends that we leave a line of retreat when discussing controversial topics, since this prevents scary propositions from clouding our judgment. However, I've noticed recently that there are some topics that are just too scary for people to think about, the existence of God being a primary example. Simply put, people don't want to admit that the universe is beyond the reach of a caring God, no matter how much evidence there is to the contrary. People especially don't want to hear that they will one day cease to exist, never to be reincarnated or continued in an afterlife. I've found this to be a major stumbling block when having discussions with theists or agnostics--though the people I've talked to are willing to accept that nonbelievers can lead very moral lives, the thought that "it's just us" is the stopsign that prevents the discussion from moving further. Naturally I've explained that it's important to only believe things that are true, but for some people this meme just can't overcome the scariness of a naturalistic universe.
Have any LessWrongians managed to overcome this obstacle? If so, how? We can generalize this problem somewhat: are there effective techniques for getting people to clearly evaluate the probability of scary or depressing propositions? Explanations with the smallest amount of inferential distance are preferred--while something like cryonics does answer most of the theistic objections raised above, it's a huge distance away from most people's belief systems. (That said, it's quite possible that the answer to my question might be "No, there are no effective techniques that have short inferential distances," and in the spirit of this post I'm willing to accept that.) I'd also be interested in hearing anecdotes about similar situations if anyone has any.
Link: Chessboxing could help train automatic emotion regulation
EDIT: Argh, I really failed to read this closely. Rewriting...
Just saw this over at Not Exactly Rocket Science: chessboxing (or similar games) could help train automatic emotion regulation. Obviously this should generalize. Has this been done before - by which I mean, have people deliberately sought out activities that train automatic emotion regulation? This doesn't seem to be anything new - and it's extrapolation, not an experimental result - but it's a neat application.
Help Request: How to maintain focus when emotionally overwhelmed
So my personal life just got very interesting. In a net-positive way, certainly, but still, I am, as Calculon put it, "filled with a large number of powerful emotions!" -- some of which are anxious and/or panicky.
This is making it annoyingly difficult to focus at work. I am an absolutely textbook "Attention Deficit Oh-look-a-squirrel!" case at the best of times, and this seems to have made it much, much worse. I can handle small tasks, but for anything where I'll have to spend an hour solving multiple problems before producing results, I can hardly make myself start.
Has anyone dealt with the problem of maintaining productive focus while emotionally overwhelmed/exhausted, and if so, do you have any pointers?
How do autistic people learn how to read people's emotions?
From my understanding, people on the autism spectrum have difficulty reading other people's emotions and general social cues. I'm curious how people on the spectrum develop these skills, and what one can do to improve them. I ask this as a matter of personal interest; while I am mostly neurotypical, I feel this is an area where I am very lacking.
(Sidenote: would this be considered an appropriate use of the discussion section?)