AndreInfante
AndreInfante has not written any posts yet.

I feel like the dog brain studies are at least fairly strong evidence that quite a bit of information is preserved. The absence of independent validation is largely down to the poor mainstream perception of cryonics. It's not that Alcor is campaigning to cover up contrary studies; it's that nobody cares enough to do them. Vis-a-vis the use of dogs, there actually aren't many animals with brain volume comparable to humans'. I mean, if you want to find an IRB that'll let you decorticate a giraffe, be my guest. Dogs are a decent analog under the circumstances. They're not so much smaller that you'd expect drastically different results.
In any case, ...
Sorry, I probably should have been more specific. What I should really say is 'how important the unique fine-grained structure of white matter is.'
If the structure is relatively generic between brains, and doesn't encode identity-crucial info in its microstructure, we may be able to fill it in using data from other brains in the future.
Just an enthusiastic amateur who's done a lot of reading. If you're interested in hearing a more informed version of the pro-cryonics argument (and seeing some of the data) I recommend the following links:
On ischemic damage and the no-reflow phenomenon: http://www.benbest.com/cryonics/ischemia.html
Alcor's research on how much data is preserved by their methods:
http://www.alcor.org/Library/html/braincryopreservation1.html
http://www.alcor.org/Library/html/newtechnology.html
http://www.alcor.org/Library/html/CryopreservationAndFracturing.html
Yudkowsky's counter-argument to the philosophical issue of copies vs. "really you": http://lesswrong.com/lw/r9/quantum_mechanics_and_personal_identity/
If we could “upload” or roughly simulate any brain, it should be that of C. elegans. Yet even with the full connectome in hand, a static model of this network of connections lacks most of the information necessary to simulate the mind of the worm. In short, brain activity cannot be inferred from synaptic neuroanatomy.
Straw man. Connectomics is relevant for explaining the concept of uploading to the layman. Few cryonics proponents actually believe a connectome is all you need to reconstruct the brain (see the sketch below for what a static wiring diagram leaves out).
The features of your neurons (and other cells) and synapses that make you "you" are not generic. The vast array of subtle chemical modifications, states of gene regulation, ...
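To make the point above concrete: a connectome tells you which neurons touch which, but simulating activity also requires synaptic weights, signs (excitatory vs. inhibitory), and time constants, none of which a static wiring diagram supplies. Here's a toy rate-model sketch in Python; every parameter below is invented for illustration, not measured from any worm:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 302  # C. elegans has roughly 302 neurons

# The connectome: topology only -- who connects to whom.
connectome = rng.random((n, n)) < 0.05

# Everything below is *absent* from a static connectome and has to be
# guessed or measured separately before a simulation means anything.
weights = rng.normal(0.0, 1.0, (n, n)) * connectome  # strengths and signs
tau = rng.uniform(5.0, 50.0, n)                      # time constants (ms)

def step(v, dt=1.0, drive=0.1):
    """One Euler step of a toy rate model: dv/dt = (-v + W@f(v) + I) / tau."""
    rates = np.tanh(v)
    return v + dt * (-v + weights @ rates + drive) / tau

v = np.zeros(n)
for _ in range(100):
    v = step(v)
print("activity of first 5 neurons:", np.round(v[:5], 3))
```

Swap in different guesses for weights and tau and you get qualitatively different dynamics from the identical connectome, which is exactly the quoted author's point.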
That's... an odd way of thinking about morality.
I value other human beings because I value the processes that go on inside my own head, and I can recognize the same processes at work in others, thanks to my built-in empathy and theory of mind. As such, I prefer that good things happen to them rather than bad. There isn't any universal 'shouldness' to it; it's just the way I'd rather things be. And since most other humans have similar values, we can work together, arm in arm. Our values converge rather than diverge. That's morality.
I extend that value to people of different races and cultures because I can see that they embody the same conscious processes I value. I do not extend it to brain-dead people, fetuses, or chickens, because I don't see that value present within them. The same goes for a machine with a very alien cognitive architecture that doesn't implement the cognitive algorithms I value.
But that might be quite a lot of detail!
In the example of curing cancer, your computational model of the universe would need to include a complete model of every molecule of every cell in the human body, and how it interacts under every possible set of conditions. The simpler you make the model, the more you risk cutting off all of the good solutions with your assumptions (or accidentally creating false solutions due to your shortcuts). And that's just for medical questions.
I don't think it's going to be possible for an unaided human to construct a model like that for a very long time, and possibly not ever.
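Some rough arithmetic makes the scale vivid (every number below is an order-of-magnitude guess, not a measurement):

```python
# Back-of-envelope estimate of the state size of a molecule-level body model.
cells_in_body = 3.7e13        # commonly cited estimate of human cell count
molecules_per_cell = 1e10     # very rough order of magnitude, proteins alone
bytes_per_molecule = 1        # absurdly optimistic: one byte of state each

state_bytes = cells_in_body * molecules_per_cell * bytes_per_molecule
print(f"state size: {state_bytes:.1e} bytes")  # ~3.7e+23 bytes

# For comparison, total worldwide data storage is on the order of 1e23
# bytes, so even *storing* one static snapshot at this resolution strains
# plausibility -- before simulating any dynamics at all.
```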
The traditional argument is that there's a vast space of possible optimization processes, and the vast majority of them don't have humanlike consciousness or ego or emotions. Thus, we wouldn't assign them human moral standing. AIXI isn't a person and never will be.
A slightly stronger argument is that there's no way in hell we're going to build an AI that has emotions or ego or the ability to be offended by serving others wholeheartedly, because that would be super dangerous, and defeat the purpose of the whole project.
Your lawnmower isn't your slave. "Slave" prejudicially loads the concept with anthropocentric morality that does not actually exist.
I think there's a question of how we create an adequate model of the world for this idea to work. It's probably not practical to build one by hand, so we'd likely need to hand the task over to an AI.
Might it be possible to use the modelling module of an AI in the absence of the planning module (or with only a weak planning module)? If so, you might be able to feed it a great deal of data about the universe, and construct a model that could then be "frozen" and used as the basis for the AI's "virtual universe."
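For what it's worth, here's a minimal sketch of the "freeze the model, then plan against it" idea, assuming a toy learned linear dynamics model and a deliberately weak one-step random-shooting planner (the names and numbers are hypothetical, not any existing system):

```python
import numpy as np

class WorldModel:
    """Toy learned dynamics model: predicts next state from (state, action)."""
    def __init__(self, dim):
        self.W = np.random.default_rng(1).normal(0, 0.1, (dim, dim * 2))
        self.frozen = False

    def predict(self, s, a):
        return self.W @ np.concatenate([s, a])

    def fit_step(self, s, a, s_next, lr=0.01):
        if self.frozen:
            raise RuntimeError("model is frozen; no further learning allowed")
        x = np.concatenate([s, a])
        self.W -= lr * np.outer(self.predict(s, a) - s_next, x)

    def freeze(self):
        """Lock the model: the planner may query it but never change it."""
        self.frozen = True
        self.W.setflags(write=False)

def weak_planner(model, s, goal, n_samples=200):
    """Deliberately weak planner: one-step random shooting."""
    rng = np.random.default_rng(2)
    actions = rng.normal(0, 1, (n_samples, s.size))
    costs = [np.sum((model.predict(s, a) - goal) ** 2) for a in actions]
    return actions[int(np.argmin(costs))]

dim = 4
model = WorldModel(dim)
rng = np.random.default_rng(3)
true_A = np.eye(dim) * 0.9
for _ in range(2000):  # learn simple linear dynamics: s' = A @ s + a
    s, a = rng.normal(0, 1, dim), rng.normal(0, 1, dim)
    model.fit_step(s, a, true_A @ s + a)
model.freeze()  # the "virtual universe" is now fixed
print("action:", np.round(weak_planner(model, np.ones(dim), np.zeros(dim)), 2))
```

The design point is that freeze() gives a one-way boundary: planning can read the frozen world model but can't optimize it, which is the separation the question is gesturing at.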
According to the PM I got, I had the most credible vegetarian entry, and it was ranked as much more credible than my actual (meat-eating) beliefs. I'm not sure how I feel about that.