I feel like the dog brain studies are at least fairly strong evidence that quite a bit of information is preserved. The absence of an independent validation is largely down to the poor mainstream perception of cryonics. It's not that Alcor is campaigning to cover up contrary studies - it's that nobody cares enough to do them. Vis-a-vis the use of dogs, there actually aren't that many animals with comparable brain volume to humans. I mean, if you want to find an IRB that'll let you decorticate a giraffe, be my guest. Dogs are a decent analog, under the circ...
Sorry, I probably should have been more specific. What I should really say is 'how important the unique fine-grained structure of white matter is.'
If the structure is relatively generic between brains, and doesn't encode identity-crucial info in its microstructure, we may be able to fill it in using data from other brains in the future.
Just an enthusiastic amateur who's done a lot of reading. If you're interested in hearing a more informed version of the pro-cryonics argument (and seeing some of the data) I recommend the following links:
On ischemic damage and the no-reflow phenomenon: http://www.benbest.com/cryonics/ischemia.html
Alcor's research on how much data is preserved by their methods: http://www.alcor.org/Library/html/braincryopreservation1.html http://www.alcor.org/Library/html/newtechnology.html http://www.alcor.org/Library/html/CryopreservationAndFracturing.html
Yudkowsky's cou...
If we could “upload” or roughly simulate any brain, it should be that of C. elegans. Yet even with the full connectome in hand, a static model of this network of connections lacks most of the information necessary to simulate the mind of the worm. In short, brain activity cannot be inferred from synaptic neuroanatomy.
Straw man. Connectomics is relevant to trying to explain the concept of uploading to the layman. Few cryonics proponents actually believe it's all you need to know to reconstruct the brain.
...The features of your neurons (and other cells)
That's... an odd way of thinking about morality.
I value other human beings, because I value the processes that go on inside my own head, and can recognize the same processes at work in others, thanks to my in-built empathy and theory of mind. As such, I prefer that good things happen to them rather than bad. There isn't any universal 'shouldness' to it, it's just the way that I'd rather things be. And, since most other humans have similar values, we can work together, arm in arm. Our values converge rather than diverge. That's morality.
I extend that ...
But that might be quite a lot of detail!
In the example of curing cancer, your computational model of the universe would need to include a complete model of every molecule of every cell in the human body, and how it interacts under every possible set of conditions. The simpler you make the model, the more you risk cutting off all of the good solutions with your assumptions (or accidentally creating false solutions due to your shortcuts). And that's just for medical questions.
I don't think it's going to be possible for an unaided human to construct a model like that for a very long time, and possibly not ever.
The traditional argument is that there's a vast space of possible optimization processes, and the vast majority of them don't have humanlike consciousness or ego or emotions. Thus, we wouldn't assign them human moral standing. AIXI isn't a person and never will be.
A slightly stronger argument is that there's no way in hell we're going to build an AI that has emotions or ego or the ability to be offended by serving others wholeheartedly, because that would be super dangerous, and defeat the purpose of the whole project.
Your lawnmower isn't your slave. "Slave" prejudicially loads the concept with anthropocentric morality that does not actually exist.
I think there's a question of how we create an adequate model of the world for this idea to work. It's probably not practical to build one by hand, so we'd likely need to hand the task over to an AI.
Might it be possible to use the modelling module of an AI in the absence of the planning module? (or with a weak planning module) If so, you might be able to feed it a great deal of data about the universe, and construct a model that could then be "frozen" and used as the basis for the AI's "virtual universe."
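A toy sketch of what I mean, with every name and number made up for illustration: learn a next-state predictor from data, freeze it, and only then let any (weak or absent) planner query it. The `WorldModel` class, its methods, and the linear toy "universe" are all hypothetical, just to show the separation of phases.

```python
import numpy as np

class WorldModel:
    """Learns a linear next-state predictor from observed transitions."""
    def __init__(self, dim):
        self.A = np.zeros((dim, dim))
        self.frozen = False

    def fit(self, states, next_states):
        if self.frozen:
            raise RuntimeError("model is frozen; no further learning allowed")
        # least-squares fit of next_state ~ A @ state
        X, *_ = np.linalg.lstsq(states, next_states, rcond=None)
        self.A = X.T

    def predict(self, state):
        return self.A @ state

    def freeze(self):
        self.frozen = True

# Phase 1: feed the modelling module data about a (toy) universe.
rng = np.random.default_rng(1)
true_A = np.array([[0.9, 0.1], [0.0, 0.95]])
states = rng.normal(size=(500, 2))
next_states = states @ true_A.T

model = WorldModel(dim=2)
model.fit(states, next_states)

# Phase 2: freeze the model before any planning module ever touches it.
model.freeze()

# Phase 3: a weak (or absent) planner can now only query the frozen model;
# it cannot rewrite the "virtual universe" to suit its own goals.
print(model.predict(np.array([1.0, 1.0])))  # ~ true_A @ [1, 1]
```

Obviously a real world model would be nothing like a 2x2 linear map, but the point is just that "learn, then freeze, then plan against the frozen copy" is a cleanly separable pipeline.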
Have you considered coating your fingers with capsaicin to make scratching your mucous membranes immediately painful?
(Apologies if this advice is unwanted - I have not experienced anything similar, and am just spitballing).
I made serious progress on a system for generating avatar animations based on the motion of a VR headset. It still needs refinement, but I'm extremely proud of what I've got so far.
For Omnivores:
Meat is obviously healthy for individuals. We evolved to eat as much of it as we could get. Many nutrients seem to be very difficult to obtain in sufficient, bio-available form from an all-vegetable diet. I just suspect most observant vegans are substantially malnourished.
On the planet side of things, meat is an environmental disaster. The methane emissions are horrifying, as is the destruction of rainforest. Hopefull...
Technically, it's the frogs and fish that routinely freeze through the winter. Of course, they evolved to pull off that stunt, so it's less impressive.
We've cryopreserved a whole rabbit kidney before, and were able to thaw and use it as a rabbit's sole kidney.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2781097/
We've also shown that nematode memory can survive cryopreservation:
The trouble is that larger chunks of tis...
The issue is that crashing the mosquito population doesn't work if even a few of them survive to repopulate - the plan needs indefinite maintenance, and the mosquitoes will eventually evolve to avoid our lab-bred dud males.
I wonder if you could breed a version of the mosquito that's healthy but has an aversion to humans, make your genetic change dominant, and then release a bunch of THOSE mosquitoes. There'd be less of a fitness gap between the modified mosquitoes and the original species, so if we just kept dumping modified males every year for a decade or two, we might be able to completely drive the original human-seeking genes out of the ecosystem.
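For a rough sense of the arithmetic, here's a back-of-the-envelope sketch (my own toy model; the release fraction, fitness cost, and timescale are pure assumptions): repeated yearly releases of males homozygous for a dominant human-aversion allele that carries only a small fitness cost, under random mating and discrete generations.

```python
def simulate(years=20, release_fraction=0.5, fitness_cost=0.05, p=0.0):
    """p = frequency of the modified allele in the breeding population."""
    for year in range(years):
        # released males are homozygous for the modified allele;
        # treat the release as adding that fraction to the allele pool
        p = (p + release_fraction * 1.0) / (1.0 + release_fraction)
        # selection against carriers (dominant trait, mild fitness cost)
        q = 1.0 - p
        w_bar = p * p * (1 - fitness_cost) + 2 * p * q * (1 - fitness_cost) + q * q
        p = (p * p * (1 - fitness_cost) + p * q * (1 - fitness_cost)) / w_bar
        print(f"year {year + 1:2d}: modified-allele frequency = {p:.3f}")

simulate()
```

Even with these made-up numbers, the qualitative takeaway matches the comment: when the fitness gap is small, sustained releases push the modified allele toward fixation instead of relying on the crash-and-repopulate dynamics of sterile-male programs.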
Not much to add here, except that it's unlikely that Alex is an exceptional example of a parrot. The researcher purchased him from a pet store at random to try to eliminate that objection.
Interesting! I didn't know that, and that makes a lot of sense.
If I were to restate my objection more strongly, I'd say that parrots also seem to exceed chimps in language capabilities (chimps having six billion cortical neurons). The reason I didn't bring this up originally is that chimp language research is a horrible, horrible field full of a lot of bad science, so it's difficult to be too confident in that result.
Plenty of people will tell you that signing chimps are just as capable as Alex the parrot - they just need a little bit of interpretation f...
Yes it is a pure ANN - according to my use of the term ANN (arguing over definitions is a waste of time). ANNs are fully general circuit models, which obviously can re-implement any module from any computer - memory, database, whatever. The defining characteristics of an ANN are a simulated network circuit structure based on analog/real-valued nodes, and some universal learning algorithm over the weights - such as SGD.
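To make that definition concrete, here's a minimal sketch (my own toy illustration, nothing to do with DeepMind's actual systems): a tiny two-layer network of real-valued nodes whose weights are all adjusted by the same generic rule, plain SGD on XOR.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8))   # real-valued weights, input -> hidden
W2 = rng.normal(0, 1, (8, 1))   # real-valued weights, hidden -> output
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # forward pass: every node carries an analog (real-valued) activation
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # backward pass: one generic rule (gradient descent on squared error)
    # adjusts every weight in the circuit, regardless of what the circuit does
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    W1 -= lr * (X.T @ d_h)

print(out.round(2))  # converges toward [[0], [1], [1], [0]]
```

The Atari agent is of course vastly larger and uses convolutions and Q-learning on top, but the structural point at issue is the same: one circuit of real-valued nodes, one generic learning rule over the weights.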
I think you misunderstood me. The current DeepMind AI that they've shown the public is a pure ANN. However, it has serious limitations be...
Right off the bat, you absolutely can create an AGI that is a pure ANN. In fact the most successful early precursor AGI we have - the Atari DeepMind agent - is a pure ANN. Your claim that ANNs/Deep Learning is not the end of all AGI research is quickly becoming a minority position.
The DeepMind agent has no memory, one of the problems that I noted in the first place with naive ANN systems. The DeepMind team's solution to this is the neural Turing machine model, which is a hybrid system between a neural network and a database. It's not a pure ANN. It is...
Yes, I've read your big universal learner post, and I'm not convinced. This does seem to be the crux of our disagreement, so let me take some time to rebut:
First off, you're seriously misrepresenting the success of deep learning as support for your thesis. Deep learning algorithms are extremely powerful, and probably have a role to play in building AGI, but they aren't the end-all, be-all of AI research. For starters, modern deep learning systems are absolutely fine-tuned to the task at hand. You say that they have only "a small number of hyperparamet...
I seriously doubt that. Plenty of humans want to kill everyone (or, at least, large groups of people). Very few succeed. These agents would be a good deal less capable.
So, to sum up, your plan is to create an arbitrarily safe VM, and use it to run brain-emulation-style de novo AIs patterned on human babies (presumably with additional infrastructure to emulate the hard-coded changes that occur in the brain during development to adulthood: adult humans are not babies + education). You then want to raise many, many iterations of these things under different conditions to try to produce morally superior specimens, then turn those AIs loose and let them self modify to godhood.
Is that accurate? (Seriously, let me know if I'm mi...
A ULM also requires a utility function or reward circuitry with some initial complexity, but we can also use the same universal learning algorithms to learn that component. It is just another circuit, and we can learn any circuit that evolution learned.
Okay, so we just have to determine human terminal values in detail, and plug them into a powerful maximizer. I'm not sure I see how that's different from the standard problem statement for friendly AI. Learning values by observing people is exactly what MIRI is working on, and it's not a trivial problem.
F...
Here's one from a friend of mine. It's not exactly an argument against AI risk, but it is an argument that the problem may be less urgent than it's traditionally presented.
There's plenty of reason to believe that Moore's Law will slow down in the near future.
Progress on AI algorithms has historically been rather slow.
AI programming is an extremely high level cognitive task, and will likely be among the hardest things to get an AI to do.
These three things together suggest that there will be a 'grace period' between the development of general agents
(1) Intelligence is an extendible method that enables software to satisfy human preferences. (2) If human preferences can be satisfied by an extendible method, humans have the capacity to extend the method. (3) Extending the method that satisfies human preferences will yield software that is better at satisfying human preferences. (4) Magic happens. (5) There will be software that can satisfy all human preferences perfectly but which will instead satisfy orthogonal preferences, causing human extinction.
This is deeply silly. The thing about arguing from ...
I think you misunderstand my argument. The point is that it's ridiculous to say that human beings are 'universal learning machines' and you can just raise any learning algorithm as a human child and it'll turn out fine. We can't even raise 2-5% of HUMAN CHILDREN as human children and have it reliably turn out okay.
Sociopaths are different from baseline humans by a tiny degree. It's got to be a small number of single-gene mutations. A tiny shift in information. And that's all it takes to make them consistently UnFriendly, regardless of how well they're rai...
To rebut: sociopaths exist.
What are the advantages to the hybrid approach as compared to traditional cryonics? Histological preservation? Thermal cracking? Toxicity?
Thank you!
That sounds fascinating. Could you link to some non-paywalled examples?
The odds aren't good, but here's hoping.
Amusingly, I just wrote an (I think better) article about the same thing.
http://www.makeuseof.com/tag/heres-scientists-think-worried-artificial-intelligence/
Business Insider can probably muster more attention than I can though, so it's a tossup about who's actually being more productive here.
According to the PM I got, I had the most credible vegetarian entry, and it was ranked as much more credible than my actual (meat-eating) beliefs. I'm not sure how I feel about that.