Comment author: NxGenSentience 26 September 2014 01:34:42AM 5 points

I love this question. As it happens, I wrote my honors thesis on the mind-body problem (while I was a philosophy and math double-major at UC Berkeley), and have been passionately interested in consciousness, brains (and also AI) ever since – a couple of decades now.

I will try to be self-disciplined and remain as agnostic as I can – by not steering you only toward the people I think are more right (or "less wrong"). I will also resist the tendency to write ten-thousand-word answers to questions like this (which in any case would still barely scratch the surface of the body of material and the spectrum of theory and informed opinion).

I have skimmed the answers already given, and the ones I have read on this page are very good – and as intellectually honest and agnostic as one would expect of the high-caliber folks on this site.

Perhaps I should just give a somewhat "meta" answer to your question, and maybe I will add something specific later on, after I have a chance to look up some links and bookmarks I have in mind (which are distributed among several laptops, cloud drives, desktop machines, my smartphone, and my iPad, plus the stacks of research-paper hardcopies I have all over my living space).

The "meta", or strategic and supportive, advice would include the following.

1) Congratulations on your interest in the most fascinating, central, interdisciplinary, intellectually rich and fertile, and copiously addressed scientific, philosophical, and human-nature question of all.

2) Be aware that you are jumping into a very, very big intellectual ocean. You could fill a decent-sized library with books and journals, or a terabyte hard drive with electronic copies of the same sources, and the question is now more popular than ever, in more disciplines than formerly took it up. (As an example of the latter, hard-core neurologists – clinical and research – and bench-level working lab neurobiologists now routinely publish amazing papers seeking to pin down, theorize about, or otherwise shed light on "the issue of consciousness.")

3) Give yourself a year (or 10) – but it will be an enjoyable year (or 10) – to read widely, think hard, and keep looking around at new theories, authors, and papers. It is fair to say that no one has "the answer" yet, but there are excellent and amazingly imaginative proposed answers, and some of them are likely to be at least on the right track. After a year or more, you will begin to develop a sense of which kinds of answers have more or less merit, as your intuitions sharpen and you build up new layers of understanding.

4) Be intellectually "mobile." Look everywhere: Amazon, the journals, PubMed, the Internet Encyclopedia of Philosophy, the Stanford Encyclopedia of Philosophy (just Google them; they have great summaries), and various cognitive-science sub-collections.

The good news is that nearly everything you need to conduct any level of research is online for free – in case you don't have a fortune to spend on books.

Lastly, as it happens – and this is something for a couple of months down the road – I am in the process of setting up a couple of YouTube channels, which will have mini-courses of lectures on certain special application areas, like AI, as well as general introductions to the mind-body problem and its different guises. It will take me a couple of months to go live with the videos, but they should be helpful as well. I intend to have something for all levels of expertise. But that is in the future. (Not a commercial announcement at all... it will be a free and open presentation of ideas – a vlog, but done a bit more rigorously.)

It is my view that most introductory, and some sophisticated, aspects of the "mind-body problem" – at least why there is one, what forms it takes, and which different, unavoidable lines of thought land us there – can be explained by a good tutor to any intelligent layperson. (I think there is room to improve on how the problem is posed and how its ins and outs are explained by many philosophy and cognitive science instructors, which is why I will be creating the video sequences.)

But, in general, you are in for quite an adventure. Keep reading, keep Googling. The resources available are almost boundless, and growing rapidly.

We are in the best time so far, in all of human history, for someone to be interested in this question. And it touches on almost every branch of human knowledge or thought, in some way… from ethics, to interpretations of quantum mechanics.

Maybe you, or one of us in here, will be the “clerk working in a patent office” that connects the right combination of puzzle pieces, and adds a crucial insight, that dramatically advances our understanding of consciousness, in a definitive way.

Enjoy the voyage…

Comment author: mgg 01 October 2014 12:35:42AM 0 points

That sort of confirms my suspicion – that it's a very active topic, and not necessarily easy to break into. I was hoping there was a good pop-sci summary book that laid things out really nicely, like what The Selfish Gene does for evolution. But I read the book Blindsight, and am now reading Metzinger's The Ego Tunnel, just because it seemed incredibly interesting. So who knows how deep this will go for me :)

Comment author: [deleted] 24 September 2014 08:10:15AM *  7 points

Blindsight is an excellent hard sci-fi novel which you might want to consider reading if you like that sort of thing, and I'll say no more about it.

If you liked Blindsight's ideas, you should definitely try to read Being No One: The Self-Model Theory of Subjectivity by Thomas Metzinger. Apparently Blindsight was heavily inspired by it. This is what the author has to say about it:

Let's get the biggies out of the way first. Metzinger's Being No One is the toughest book I've ever read (and there are still significant chunks of it I haven't), but it also contains some of the most mindblowing ideas I've encountered in fact or fiction. Most authors are shameless bait-and-switchers when it comes to the nature of consciousness. Pinker calls his book How the Mind Works, then admits on page one that "We don't understand how the mind works". Koch (the guy who coined the term "zombie agents") writes The Quest for Consciousness: A Neurobiological Approach, in which he sheepishly sidesteps the whole issue of why neural activity should result in any kind of subjective awareness whatsoever.

Towering above such pussies, Metzinger takes the bull by the balls. {Spoilers for Blindsight, use rot13}: Uvf "Jbeyq-mreb" ulcbgurfvf abg bayl rkcynvaf gur fhowrpgvir frafr bs frys, ohg nyfb jul fhpu na vyyhfbel svefg-crefba aneengbe jbhyq or na rzretrag cebcregl bs pregnva pbtavgvir flfgrzf va gur svefg cynpr. I have no idea whether he's right— the man's way beyond me— but at least he addressed the real question that keeps us staring at the ceiling at three a.m., long after the last roach is spent. Many of the syndromes and maladies dropped into Blindsight I first encountered in Metzinger's book. Any uncited claims or statements in this subsection probably hail from that source.

In response to comment by [deleted] on Books on consciousness?
Comment author: mgg 01 October 2014 12:32:03AM 0 points

Well, Blindsight impressed me enough that I've started The Ego Tunnel. In short, the idea of unconscious intelligence bothered me. My intuition says that consciousness could be what happens when something tries to model its own intelligence and actions, but of course that hardly explains anything. While I feel it's unlikely I'll find many good answers, the subject is interesting enough to be enjoyable to read about.

Comment author: Algernoq 24 September 2014 02:17:02AM 3 points

Definitely Gödel, Escher, Bach if you haven't already.

Consciousness is pretty damn weird and no one seems to have much of a handle on it

That sums up the current state of knowledge. What does it mean to be an "observer"?

I assume by "consciousness" you mean the hard problem of consciousness, i.e. why do I have subjective awareness at all. The "easy" problem, how other people's brains cause them to do stuff, is fairly well-covered by standard neuroscience texts.

Comment author: mgg 24 September 2014 08:34:40PM 3 points

That sums up the current state of knowledge

Which was sort of my question: Do I have a whole lot to gain by reading the current information available? Will I obtain valuable insights on things, or even be rather entertained? Or am I just gonna end up in the same place, but with a deeper respect for how difficult it is to figure things out?

Books on consciousness?

8 mgg 23 September 2014 10:28PM

Does LW have a consensus on which books are worthwhile to read regarding consciousness? I read a small intro (Consciousness: A Very Short Introduction, Susan Blackmore, Oxford University Press), and the summary seems to be "Consciousness is pretty damn weird and no one seems to have much of a handle on it". As a non-technical layman, are there any useful books for me to read on the subject?

(I have started reading Daniel Dennett's Intuition Pumps, and I'm a bit torn. He seems highly respected by good scientists, but I feel that if the book didn't have his name on it, I would be well on my way to dismissing it. Are Dennett's earlier works on consciousness a good read?)

Comment author: Bakkot 14 September 2014 05:01:05PM 2 points

It is if we define a utility function with a strict failure mode for TotalSuffering > 0.

Yeah, but... we don't.

(Below I'm going to address that case specifically. However, more generally, defining utility functions which assign zero utility to a broad class of possible worlds is a problem, because then you're indifferent between all of them. Does running around stabbing children seem like a morally neutral act to you, in light of the fact that doing it or not doing it will not have an effect on total utility (because total suffering will remain positive)? If no, that's not the utility function you want to talk about.)

Anyway, as far as I can tell, you've either discovered or reinvented negative utilitarianism. Pretty much no one around here accepts negative utilitarianism, mostly on the grounds of it disagreeing very strongly with moral intuition. (For example, most people would not regard it as a moral act to instantly obliterate Earth and everyone on it.) For me, at least, my objection is that I prefer to live with some suffering than not to live at all - and this would be true even if I was perfectly selfish and didn't care what effects my death would have on anyone else. So before we can talk usefully about this, I have to ask: leaving aside concerns about the effects of your death on others, would you prefer to die than to live with any amount of suffering?

Comment author: mgg 23 September 2014 10:22:44PM 0 points

Thanks for the reply. Yes, I found out the term is "negative utilitarianism"; I suppose I can search for rebuttals of that concept. I didn't mean that the function was "if suffering > 0 then 0" – just that suffering should be a massively dominating term, so that no possible world with real suffering ever outranks a world with less suffering.
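One way to make the "massively dominating term" idea precise is a lexicographic ordering: worlds are ranked first by total suffering, and happiness only breaks ties. This is my own toy formalization (the names and numbers are illustrative, not from the thread), sketched in Python:

```python
# Toy model: rank candidate worlds lexicographically, so that any reduction
# in suffering beats any increase in happiness. Python tuples already
# compare lexicographically, which makes the ordering a one-liner.

def world_key(world):
    suffering, happiness = world
    # Lower suffering always wins; higher happiness only breaks ties.
    return (suffering, -happiness)

worlds = {
    "utopia_with_pain": (5.0, 1000.0),  # enormous happiness, some real suffering
    "modest_world":     (1.0, 10.0),    # little of either
    "empty_universe":   (0.0, 0.0),     # no one exists at all
}

best = min(worlds, key=lambda name: world_key(worlds[name]))
print(best)  # -> empty_universe
```

Under this ordering the empty universe beats the near-utopia, which is exactly the negative-utilitarian conclusion the replies above push back on.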

As to your question about my personal preference on life, it really depends on the level of suffering. At the moment, no, things are alright. But it has not always been that way, and it's not hard to see it crossing over again.

I would definitely obliterate everyone on Earth, though, and would view not doing so, if capable, to be immoral. Purely because so many sentient creatures are undergoing a terrible existence, and the fact that you and me are having an alright time doesn't make up for it.

Comment author: Viliam_Bur 06 September 2014 04:52:46PM *  2 points

So, you say you have a "preference not to suffer" for everyone, but "preference not to die" only for a few people, if I read it correctly.

When you ask how someone can have a "preference not to die" for everyone, I think you should also ask how you have a "preference not to suffer" for everyone, because to me they seem rather similar. The "preference not to ... for everyone" part is the same in both cases, so we can ask whether it is realistic, or just some kind of illusion meant to create a better self-image. The difference between wanting someone not to suffer and wanting them not to die does not seem so big to me, knowing that many people prefer not to die, and that the idea that they will die causes them suffering.

Another thing is the technical limitation of the human brain. If the death or suffering of one person causes you some amount of sadness (whether we measure it by neurons firing or by hormones in the blood), the death or suffering of a million people obviously cannot cause you a million times more neural signals or hormones, because such a thing would kill you instantly. The human brain simply does not have the capacity to multiply this.

But for a transhumanist this is simply a bug in the human brain. What our brains do is not what we want them to do; it is not the case that "whatever my brain does is by definition what I think is correct." We are here to learn about biases and try to fix them. The human brain's inability to properly multiply emotions is simply yet another such bias. The fact that my brain is unable to care about some things on the emotional level does not mean that I don't care. It merely means that I currently lack the capacity to feel it on the gut level.

Comment author: mgg 06 September 2014 09:06:29PM 2 points

Good points. But I'm thinking that the pain of death is purely because of the loss others feel. So if I could eliminate my entire family and everyone they know (which ends up pulling essentially every person alive into the graph), painlessly and quickly, I'd do it.

The bug of scope insensitivity doesn't apply if everyone gets wiped out nicely, because then the total suffering is 0. So, for instance, grey goo taking over the world in an hour - that'd cause a spike of suffering, but then levels drop to 0, so I think it's alright. Whereas an asteroid that kills 90% of people, that'd leave a huge amount of suffering left for the survivors.

In short, the pain of one child dying is the sum of the pain others feel, not an intrinsic to that child dying. So if you shut up and multiply with everyone dying, you get 0. Right?

Comment author: TsviBT 06 September 2014 09:09:09AM 2 points

All else being equal, if you have the choice, would you pick (a) your son/daughter immediately ceases to exist, or (b) your son/daughter experiences a very long, joyous life, filled with love and challenge and learning, and yes, some dust specks and suffering, but overall something they would describe as "an awesome time"? (The fact that you might be upset if they ceased to exist is not the point here, so let it be specified that (a) is actually everyone disappearing, which includes your child as a special case, and likewise (b) for everyone, again including your child as a special case.)

Comment author: mgg 06 September 2014 08:59:54PM 2 points

If the suffering "rounds down" to 0 for everyone, sure, (b) is fine – that is, a bit of pain in order to keep the Fun, but no hellish levels of suffering for anyone. Otherwise, (a). Given how the world currently looks, and MWI, it's hard to see how it's possible to end up with everyone having pain that rounds down to 0.

So given the current world and my current understanding, if someone gave me a button to press that'd eliminate earth in a minute or so, I'd press it without hesitation.

Comment author: chaosmage 05 September 2014 06:27:17PM *  5 points

People, by and large, appear to favor suffering over suicide. I don't think it can be ethical to overrule that choice.

Comment author: mgg 06 September 2014 08:56:21PM 1 point

It is if we define a utility function with a strict failure mode for TotalSuffering > 0. Non-existent people don't really count, do they?

Comment author: polymathwannabe 05 September 2014 06:38:28PM 3 points

Your original post says,

the logical conclusion is that we should completely destroy the universe, in a quick and painless manner

Would you please describe the sequence of thoughts leading to that conclusion?

Comment author: mgg 06 September 2014 08:55:14PM 2 points

Sure. The goal is to make TotalSuffering as small as possible, where each individual Suffering is >= 0. There may be some level of individual Suffering that rounds down to zero – like the pain of hurting your leg while trying to run faster, or stuff like that. The goal is to make sure no one is in real suffering, not to eliminate all Fun.

One approach is to make sure no one is suffering. That entails a gigantic amount of work, and if I understand MWI correctly, it's actually impossible, as branches will happen that create a sort of hell. (I'm only considering forward branches.) Sure, it "all averages out to normal", but tell that to someone in a hell branch.

The other way is to eliminate all life (or the universe). Suffering is now at 0, an optimal value.
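The objective described in this comment can be sketched as a toy in Python (the threshold value and names are my own illustrative assumptions, not from the thread): individual suffering below some small threshold "rounds down" to zero, the total is whatever remains, and a world with no one in it scores the minimum of 0.

```python
# Toy objective: total suffering where small pains "round down" to zero.
THRESHOLD = 0.1  # e.g. hurting your leg while trying to run faster

def total_suffering(individual_suffering):
    # Each individual's suffering is >= 0; values at or below the
    # threshold are treated as zero and drop out of the total.
    return sum(s for s in individual_suffering if s > THRESHOLD)

print(total_suffering([0.05, 0.08]))  # minor aches only -> 0
print(total_suffering([0.05, 3.0]))   # one person in real suffering -> 3.0
print(total_suffering([]))            # no one exists -> 0, the "optimal" value
```

Note the degenerate feature the other commenters object to: the empty world ties with a world full of thriving, nearly pain-free people, because the objective never rewards anything positive.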

Comment author: polymathwannabe 05 September 2014 02:21:05PM 3 points

Living among billions of happy people who have realistic chances to meet their goals is a world I find much more desirable than a world where my friends and I are the only successful people in existence.

On one hand, there's the cold utilitarian who only values other lives inasmuch as they further hir goals, and assigns no intrinsic worth to whatever goals they may have for themselves. This position does not coincide with solipsism, but it overlaps with it. On the other hand, there's what we could call the naïve Catholic, who holds that more life is always better, no matter in what horrid conditions. This position does not coincide with panpsychism, but it overlaps with it.

The strong altruistic component of EY's philosophy is what sets it on a higher moral ground than Ayn Rand's. For all her support of reason, Rand's fatal flaw was that she failed to grasp the need for altruism; it was anathema to her, even if her brand of selfishness was strange in that she recognized other people's right to be selfish too (the popular understanding of selfishness is more predatory than even she allowed).

EY agrees with Rand's position that every mind should be free to improve itself, but he doesn't dismiss cooperation. It makes perfect sense: The ferociously competitive realm of natural selection does often select for cooperation, which strongly suggests it's a useful strategy. I can't claim to divine his reasons, but the bottom line is that EY gets altruism.

(As chaosmage suggested, it is not impossible that EY merely pretends to be an altruist so people will feel more comfortable letting him talk his way into world domination (ahem, optimization), but the writing style of his texts about the future of humanity and about how much it matters to him is likelier if he really believes what he says.)

Still, the question stands: Why care about random people? I notice it's difficult for me to verbalize this point because it's intuitively obvious to me, so much so that my gut activates a red alarm at the sight of a fellow human who doesn't share that feeling.

Whence empathy? Although empathy has a long tradition of support in many philosophies, antiquity alone is not a valid argument. Warfaring chimpanzees share as much DNA with us as hippie bonobos do; mirror neurons have not been conclusively proven to exist; and disguised sociopathy sounds like an optimal strategy.

Buddhism has a concept that I find highly appealing. It's called metta and it basically states that sentient beings' preference for not suffering is one you can readily agree with because you're a sentient being too. There are several ways to express the same idea in contemporary terms: We're all in this together, we're not so different, and other feel-good platitudes.

We can go one step further and assert this: A world where only some personal sets of preferences get to be realized runs the risk of your preferences being ignored, because there's no guarantee that you will be the one who decides which preferences are favored; whereas a world where all personal sets of preferences are equally respected is the one where yours have the best chance of being realized. To paraphrase the Toyota ads, what's good for the entire world is good for you.

(I know most LWers will demand a selfish justification for altruism because any rational decision theory will require it, but I feel hypocritical having to provide a selfish argument for altruism. Ideally, caring for others shouldn't need to be justified by resorting to an expected personal benefit, but I acknowledge that trying to advance this point is like trying to show a Christian ascetic that hoping to get to heaven by renouncing worldly pleasures is the epitome of calculated hedonism. I still haven't resolved this contradiction, but fortunately this is the one place in all the Internet where I can feel safe expecting to be proved wrong.)

Comment author: mgg 05 September 2014 06:13:11PM 1 point

But he [EY] views extinction-level events as "that much worse" than a single death. Is an extinction-level event really that bad, though? If everyone gets wiped out, there's no suffering left.

I'm not against others being happy and successful, and sure, that's better than their not being. But I seem to have no preference for anyone existing – even myself, my kids, my family. If I could, I'd erase the entire lot of us; it's just not practical.
