Comment author: AndreInfante 16 October 2015 07:53:43AM 2 points [-]

According to the PM I got, I had the most credible vegetarian entry, and it was ranked as much more credible than my actual (meat-eating) beliefs. I'm not sure how I feel about that.

Comment author: V_V 17 September 2015 02:44:13PM 5 points [-]

Straw man. Connectomics is relevant to trying to explain the concept of uploading to the layman. Few cryonics proponents actually believe it's all you need to know to reconstruct the brain.

I don't think so. Cryonics is predicated upon the hypothesis that the fine structural details which probably can't be preserved with current methods are not important to reconstruct personal identity.

The fact that someone can be dead for several hours and then be resuscitated, or have their brain substantially heated or cooled without dying

I don't think that substantial heating is survivable. Where did you get that information?

Anyway, the types of disruption that occur to brain tissue during cryopreservation (hours-long warm ischemia, cryoprotectant damage, ice crystal formation and thermal shearing) are very different from those which occur in all known survivable events. Warm ischemia can be reduced with prompt cryopreservation (but even in the highly publicized case of Kim Suozzi, where death was expected and a lot of preparation took place, they still couldn't avoid it), it's unclear how much ice crystal formation can be reduced (it's believed to also depend on cryopreservation promptness, but that's more speculative), and cryoprotectant damage and thermal shearing are currently unavoidable.

This is not, to the best of my knowledge, true, and he offers no evidence for this claim. Cryonics does a very good job of preserving a lot of features of brain tissue.

You are reversing the burden of evidence. It's the cryonics supporters who have to provide evidence that cryonics preserves the relevant brain tissue features. To my knowledge, this evidence seems to be scarce or absent. AFAIK, no cryopreserved human brain, or a brain of comparable size, has ever been analyzed. There were some studies done by Alcor on dog brains, but these were never replicated by independent researchers. Dog brains, anyway, are smaller and hence easier to vitrify than human brains.

Comment author: AndreInfante 19 September 2015 08:00:14AM *  3 points [-]

I feel like the dog brain studies are at least fairly strong evidence that quite a bit of information is preserved. The absence of an independent validation is largely down to the poor mainstream perception of cryonics. It's not that Alcor is campaigning to cover up contrary studies - it's that nobody cares enough to do them. Vis-a-vis the use of dogs, there actually aren't that many animals with comparable brain volume to humans. I mean, if you want to find an IRB that'll let you decorticate a giraffe, be my guest. Dogs are a decent analog, under the circumstances. They're not so much smaller you'd expect drastically different results.

In any case, if this guy wants to claim that cryonics doesn't preserve fine-grained brain detail, he can do the experiment and prove it. You can't just point at a study you don't like and shout 'the authors might be biased' and thus refute its claim. You need to be able to provide either serious methodological flaws, or an actual failure to replicate.

Comment author: CellBioGuy 16 September 2015 12:50:14PM *  3 points [-]

Whether that's a showstopper depends on how important you think the fine-grained structure of white matter is.

Considering that damaging large amounts of white matter gives you things like lobotomy and alien hand syndrome, or sensorimotor impairment, and that subcortical structures are vitally important...

Comment author: AndreInfante 16 September 2015 07:55:45PM 3 points [-]

Sorry, I probably should have been more specific. What I should really say is 'how important the unique fine-grained structure of white matter is.'

If the structure is relatively generic between brains, and doesn't encode identity-crucial info in its microstructure, we may be able to fill it in using data from other brains in the future.

Comment author: Ruzeil 16 September 2015 09:11:44AM 1 point [-]

Since I don't have much academic knowledge on this subject, I appreciate your feedback a lot. Can I just ask what your level of competence in this field is?

BR

Comment author: AndreInfante 16 September 2015 09:37:37AM 1 point [-]

Just an enthusiastic amateur who's done a lot of reading. If you're interested in hearing a more informed version of the pro-cryonics argument (and seeing some of the data) I recommend the following links:

On ischemic damage and the no-reflow phenomenon: http://www.benbest.com/cryonics/ischemia.html

Alcor's research on how much data is preserved by their methods: http://www.alcor.org/Library/html/braincryopreservation1.html http://www.alcor.org/Library/html/newtechnology.html http://www.alcor.org/Library/html/CryopreservationAndFracturing.html

Yudkowsky's counter-argument to the philosophical issue of copies vs. "really you": http://lesswrong.com/lw/r9/quantum_mechanics_and_personal_identity/

Comment author: AndreInfante 16 September 2015 08:43:17AM *  7 points [-]

If we could “upload” or roughly simulate any brain, it should be that of C. elegans. Yet even with the full connectome in hand, a static model of this network of connections lacks most of the information necessary to simulate the mind of the worm. In short, brain activity cannot be inferred from synaptic neuroanatomy.

Straw man. Connectomics is relevant to trying to explain the concept of uploading to the layman. Few cryonics proponents actually believe it's all you need to know to reconstruct the brain.

The features of your neurons (and other cells) and synapses that make you “you” are not generic. The vast array of subtle chemical modifications, states of gene regulation, and subcellular distributions of molecular complexes are all part of the dynamic flux of a living brain. These things are not details that average out in a large nervous system; rather, they are the very things that engrams (the physical constituents of memories) are made of.

The fact that someone can be dead for several hours and then be resuscitated, or have their brain substantially heated or cooled without dying, puts a theoretical limit on how sensitive your long-term brain state can possibly be to these sorts of transient details of brain structure. It seems very likely that long-term identity-related brain state is stored almost entirely in relatively stable neurological structures. I don't think this is particularly controversial, neurobiologically.

While it might be theoretically possible to preserve these features in dead tissue, that certainly is not happening now. The technology to do so, let alone the ability to read this information back out of such a specimen, does not yet exist even in principle. It is this purposeful conflation of what is theoretically conceivable with what is ever practically possible that exploits people’s vulnerability.

This is not, to the best of my knowledge, true, and he offers no evidence for this claim. Cryonics does a very good job of preserving a lot of features of brain tissue. There is some damage done by the cryoprotectants and thermal shearing, but it's specific and well-characterized damage, not total structural disruption. Although I will say that ice crystal formation in the deep brain caused by the no-reflow problem is a serious concern. Whether that's a showstopper depends on how important you think the fine-grained structure of white matter is.

But what is this replica? Is it subjectively “you” or is it a new, separate being? The idea that you can be conscious in two places at the same time defies our intuition. Parsimony suggests that replication will result in two different conscious entities. Simulation, if it were to occur, would result in a new person who is like you but whose conscious experience you don’t have access to.

Bad philosophy on top of bad neuroscience!

Comment author: PhilGoetz 26 August 2015 12:53:01AM *  0 points [-]

I like your second argument better. The first, I think, holds no water.

There are basically 2 explanations of morality, the pragmatic and the moral.

By pragmatic I mean the explanation that "moral" acts ultimately are a subset of the acts that increase our utility function. This includes evolutionary psychology, kin selection, and group selection explanations of morality. It also includes most pre-modern in-group/out-group moralities, like Athenian or Roman morality, and Nietzsche's consequentialist "master morality". A key problem with this approach is that if you say something like, "These African slaves seem to be humans rather like me, and we should treat them better," that is a malfunctioning of your morality program that will decrease your genetic utility.

The moral explanation posits that there's a "should" out there in the universe. This includes most modern religious morality, though many old (and contemporary) tribal religions were pragmatic and made practical claims (don't do this or the gods will be angry), not moral ones.

Modern Western humanistic morality can be interpreted either way. You can say the rule not to hurt people is moral, or you can say it's an evolved trait that gives higher genetic payoff.

The idea that we give moral standing to things like humans doesn't work in either approach. If morality is in truth pragmatic, then you'll assign them moral standing if they have enough power for it to be beneficial for you to do so, and otherwise not, regardless of whether they're like humans or not. (Whether or not you know that's what you're doing.) The pragmatic explanation of morality easily explains the popularity of slavery.

"Moral" morality, from where I stand, seems incompatible with the idea that we assign moral standing to things for looking or thinking like us. I feel no "oughtness" to "we should treat agents different from us like objects." For one thing, it implies racism is morally right, and probably an obligation. For another, it's pretty much exactly what most "moral leaders" have been trying to overcome for the past 2000 years.

It feels to me like what you're doing is starting out by positing morality is pragmatic, and so we expect by default to assign moral status to things like us because that's always a pragmatic thing to do and we've never had to admit moral status to things not like us. Then you extrapolate it into this novel circumstance, in which it might be beneficial to mutually agree with AIs that each of us has moral status. You've already agreed that morals are pragmatic at root, but you are consciously following your own evolved pragmatic programming, which tells you to accept as moral agents things that look like you. So you say, "Okay, I'll just apply my evolved morality program, which I know is just a set of heuristics for increasing my genetic fitness and has no compelling oughtness to it, in this new situation, regardless of the outcome." So you're self-consciously trying to act like an animal that doesn't know its evolved moral program has no oughtness to it. That's really strange.

If you mean that humans are stupid and they'll just apply that evolved heuristic without thinking about it, then that makes sense. But then you're being descriptive. I assumed you were being prescriptive, though that's based on my priors rather than on what you said.

Comment author: AndreInfante 26 August 2015 04:41:31AM 0 points [-]

That's... an odd way of thinking about morality.

I value other human beings, because I value the processes that go on inside my own head, and can recognize the same processes at work in others, thanks to my in-built empathy and theory of the mind. As such, I prefer that good things happen to them rather than bad. There isn't any universal 'shouldness' to it, it's just the way that I'd rather things be. And, since most other humans have similar values, we can work together, arm in arm. Our values converge rather than diverge. That's morality.

I extend that value to those of different races and cultures, because I can see that they embody the same conscious processes that I value. I do not extend that same value to brain dead people, fetuses, or chickens, because I don't see that value present within them. The same goes for a machine that has a very alien cognitive architecture and doesn't implement the cognitive algorithms that I value.

Comment author: Stuart_Armstrong 25 August 2015 10:10:02AM 0 points [-]

I think there's a question of how we create an adequate model of the world

Generally, we don't. A model of the (idealised) computational process of the AI is very simple compared with the real world, and the rest of the model just needs to include enough detail for the problem we're working on.

Comment author: AndreInfante 25 August 2015 09:11:46PM 2 points [-]

But that might be quite a lot of detail!

In the example of curing cancer, your computational model of the universe would need to include a complete model of every molecule of every cell in the human body, and how it interacts under every possible set of conditions. The simpler you make the model, the more you risk cutting off all of the good solutions with your assumptions (or accidentally creation false solutions due to your shortcuts). And that's just for medical questions.

I don't think it's going to be possible for an unaided human to construct a model like that for a very long time, and possibly not ever.

Comment author: PhilGoetz 25 August 2015 01:25:51PM *  -1 points [-]

Doesn't exist? What do you mean by that, and what evidence do you have for believing it? Have you got some special revelation into the moral status of as-yet-hypothetical AIs? Some reason for thinking that it is more likely that beings of superhuman intelligence don't have moral status than that they do?

Comment author: AndreInfante 25 August 2015 09:06:20PM 3 points [-]

The traditional argument is that there's a vast space of possible optimization processes, and the vast majority of them don't have humanlike consciousness or ego or emotions. Thus, we wouldn't assign them human moral standing. AIXI isn't a person and never will be.

A slightly stronger argument is that there's no way in hell we're going to build an AI that has emotions or ego or the ability to be offended by serving others wholeheartedly, because that would be super dangerous, and defeat the purpose of the whole project.

Comment author: PhilGoetz 24 August 2015 08:21:59PM 1 point [-]

I greatly dislike the term "friendly AI". The mechanisms behind "friendly AI" have nothing to do with friendship or mutual benefit. It would be more accurate to call it "slave AI".

Comment author: AndreInfante 24 August 2015 08:32:27PM 3 points [-]

Your lawnmower isn't your slave. "Slave" prejudicially loads the concept with anthropocentric morality that does not actually exist.

Comment author: AndreInfante 24 August 2015 07:58:36PM 2 points [-]

I think there's a question of how we create an adequate model of the world for this idea to work. It's probably not practical to build one by hand, so we'd likely need to hand the task over to an AI.

Might it be possible to use the modelling module of an AI in the absence of the planning module? (or with a weak planning module) If so, you might be able to feed it a great deal of data about the universe, and construct a model that could then be "frozen" and used as the basis for the AI's "virtual universe."
