All of AndreInfante's Comments + Replies

According to the PM I got, I had the most credible vegetarian entry, and it was ranked as much more credible than my actual (meat-eating) beliefs. I'm not sure how I feel about that.

2Raelifin
Impostor entries were generally more convincing than genuine responses. I chalk this up to impostors trying harder to convince judges. But who knows? Maybe you were a vegetarian in a past life! ;)

I feel like the dog brain studies are at least fairly strong evidence that quite a bit of information is preserved. The absence of an independent validation is largely down to the poor mainstream perception of cryonics. It's not that Alcor is campaigning to cover up contrary studies - it's that nobody cares enough to do them. Vis-a-vis the use of dogs, there actually aren't that many animals with comparable brain volume to humans. I mean, if you want to find an IRB that'll let you decorticate a giraffe, be my guest. Dogs are a decent analog, under the circ... (read more)

3V_V
Dog brains are 20 times smaller than human brains: 70 g vs 1,300 - 1,400 g. Given the square-cube law, this means that dog brains have a much higher surface-to-mass ratio, so they can be cooled faster without cracking and with a lower concentration of cryoprotectant. (Cow brains, on the other hand, are just 3 times smaller than human brains and about the same size as chimp brains, hence you don't need to experiment on an exotic animal to get more comparable results.) And we don't know how much information was preserved anyway. And the fact that the only studies were made by an organization that has ideological and financial stakes in the outcome is a big problem. As far as we know, they could have selected the best micrographs, sweeping under the rug those that showed substantial damage. Cryonics organizations should encourage independent replication instead of playing the victim. The default position is that cryonics doesn't work. It's the proponents that have the burden of providing evidence that it works. ALCOR's dog brain studies are weak for the aforementioned reasons.
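To make the square-cube point concrete, here is a minimal back-of-the-envelope sketch. It assumes roughly spherical brains of equal density, which is only an illustration of the scaling, not an anatomical model; the masses and the density value are approximate, and the function name is just for this example.

```python
import math

# Back-of-the-envelope: surface-to-mass ratio for two spheres of equal density.
# Assumes roughly spherical, equal-density brains (illustrative only).
def surface_to_mass_ratio(mass_g, density_g_per_cm3=1.04):
    volume_cm3 = mass_g / density_g_per_cm3
    radius_cm = (3 * volume_cm3 / (4 * math.pi)) ** (1 / 3)
    surface_cm2 = 4 * math.pi * radius_cm ** 2
    return surface_cm2 / mass_g  # cm^2 of surface per gram of tissue

dog, human = surface_to_mass_ratio(70), surface_to_mass_ratio(1350)
print(dog / human)  # ~2.7: the smaller brain has ~2.7x more surface per gram
```

Since the ratio scales as mass^(-1/3), a roughly 20x difference in mass works out to about a 2.7x difference in surface area per gram, which is what makes the smaller brain easier to cool.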

Sorry, I probably should have been more specific. What I should really have said is 'how important the unique fine-grained structure of white matter is.'

If the structure is relatively generic between brains, and doesn't encode identity-crucial info in its microstructure, we may be able to fill it in using data from other brains in the future.

Just an enthusiastic amateur who's done a lot of reading. If you're interested in hearing a more informed version of the pro-cryonics argument (and seeing some of the data) I recommend the following links:

On ischemic damage and the no-reflow phenomenon: http://www.benbest.com/cryonics/ischemia.html

Alcor's research on how much data is preserved by their methods: http://www.alcor.org/Library/html/braincryopreservation1.html http://www.alcor.org/Library/html/newtechnology.html http://www.alcor.org/Library/html/CryopreservationAndFracturing.html

Yudkowsky's cou... (read more)

0V_V
Yudkowsky's counter-argument is a counter-argument to a straw man, since I don't think anybody ever argued in modern times that personal identity is linked to a specific set of individual atoms. Everybody knows that atoms in the brain are constantly replaced.
7anon85
You might be interested in Aaronson's proposed theory for why it might be physically impossible to copy a human brain. He outlined it in "The Ghost in the Quantum Turing Machine": http://arxiv.org/abs/1306.0159 In that essay he discusses a falsifiable theory of the brain that, if true, would mean brain states are un-copyable. So Yudkowsky's counter-argument may be a little too strong: it is indeed consistent with modern physics for brain simulation to be impossible.

If we could “upload” or roughly simulate any brain, it should be that of C. elegans. Yet even with the full connectome in hand, a static model of this network of connections lacks most of the information necessary to simulate the mind of the worm. In short, brain activity cannot be inferred from synaptic neuroanatomy.

Straw man. Connectomics is relevant to explaining the concept of uploading to the layman. Few cryonics proponents actually believe it's all you need to know to reconstruct the brain.

The features of your neurons (and other cells)

... (read more)
7V_V
I don't think so. Cryonics is predicated upon the hypothesis that the fine structural details which probably can't be preserved with current methods are not important for reconstructing personal identity. I don't think that substantial heating is survivable. Where did you get that information? Anyway, the types of disruption that occur to brain tissue during cryopreservation (hours-long warm ischemia, cryoprotectant damage, ice crystal formation and thermal shearing) are very different from those which occur in all known survivable events. Warm ischemia can be reduced with prompt cryopreservation (but even in the highly publicized case of Kim Suozzi, where death was expected and a lot of preparation took place, they still couldn't avoid it), it's unclear how much ice crystal formation can be reduced (it's believed to also depend on cryopreservation promptness, but that's more speculative), and cryoprotectant damage and thermal shearing are currently unavoidable. You are reversing the burden of evidence. It's the cryonics supporters who have to provide evidence that cryonics preserves relevant brain tissue features. To my knowledge, this evidence seems to be scarce or absent. AFAIK, no cryopreserved human brain, or a brain of comparable size, was ever analyzed. There were some studies done by ALCOR on dog brains, but these were never replicated by independent researchers. Dog brains, anyway, are smaller and hence easier to vitrify than human brains.
4[anonymous]
Considering that damaging large amounts of white matter gives you things like lobotomy and alien hand syndrome, or sensorimotor impairment, and that subcortical structures are vitally important...
2Ruzeil
Since I don't have much academic knowledge on this subject, I appreciate your feedback a lot. Can I just ask what your level of competence in this field is? BR

That's... an odd way of thinking about morality.

I value other human beings, because I value the processes that go on inside my own head, and can recognize the same processes at work in others, thanks to my in-built empathy and theory of mind. As such, I prefer that good things happen to them rather than bad. There isn't any universal 'shouldness' to it; it's just the way that I'd rather things be. And, since most other humans have similar values, we can work together, arm in arm. Our values converge rather than diverge. That's morality.

I extend that ... (read more)

0PhilGoetz
If you're describing how you expect you'd act based on your feelings, then why do their algorithms matter? I would think your feelings would respond to their appearance and behavior. There's a very large space of possible algorithms, but the space of reasonable behaviors given the same circumstances is quite small. Humans, being irrational, often deviate bizarrely from the behavior I expect in a given circumstance--more so than any AI probably would.

But that might be quite a lot of detail!

In the example of curing cancer, your computational model of the universe would need to include a complete model of every molecule of every cell in the human body, and how it interacts under every possible set of conditions. The simpler you make the model, the more you risk cutting off all of the good solutions with your assumptions (or accidentally creating false solutions due to your shortcuts). And that's just for medical questions.

I don't think it's going to be possible for an unaided human to construct a model like that for a very long time, and possibly not ever.

0Stuart_Armstrong
Indeed (see my comment on the problem with the simplified model being unsolved). However, it's a different kind of problem from standard FAI (it's "simply" a question of getting a precise enough model, and not a philosophically open problem), and there are certainly simpler versions that are tractable.

The traditional argument is that there's a vast space of possible optimization processes, and the vast majority of them don't have humanlike consciousness or ego or emotions. Thus, we wouldn't assign them human moral standing. AIXI isn't a person and never will be.

A slightly stronger argument is that there's no way in hell we're going to build an AI that has emotions or ego or the ability to be offended by serving others wholeheartedly, because that would be super dangerous, and defeat the purpose of the whole project.

0PhilGoetz
I like your second argument better. The first, I think, holds no water. There are basically 2 explanations of morality, the pragmatic and the moral. By pragmatic I mean the explanation that "moral" acts ultimately are a subset of the acts that increase our utility function. This includes evolutionary psychology, kin selection, and group selection explanations of morality. It also includes most pre-modern in-group/out-group moralities, like Athenian or Roman morality, and Nietzsche's consequentialist "master morality". A key problem with this approach is that if you say something like, "These African slaves seem to be humans rather like me, and we should treat them better," that is a malfunctioning of your morality program that will decrease your genetic utility. The moral explanation posits that there's a "should" out there in the universe. This includes most modern religious morality, though many old (and contemporary) tribal religions were pragmatic and made practical claims (don't do this or the gods will be angry), not moral ones. Modern Western humanistic morality can be interpreted either way. You can say the rule not to hurt people is moral, or you can say it's an evolved trait that gives higher genetic payoff. The idea that we give moral standing to things like humans doesn't work in either approach. If morality is in truth pragmatic, then you'll assign them moral standing if they have enough power for it to be beneficial for you to do so, and otherwise not, regardless of whether they're like humans or not. (Whether or not you know that's what you're doing.) The pragmatic explanation of morality easily explains the popularity of slavery. "Moral" morality, from where I stand, seems incompatible with the idea that we assign moral standing to things for looking or thinking like us. I feel no "oughtness" to "we should treat agents different from us like objects." For one thing, it implies racism is morally right, and probably an obligation. For another, it's pre

Your lawnmower isn't your slave. "Slave" prejudicially loads the concept with anthropocentric morality that does not actually exist.

-1PhilGoetz
Doesn't exist? What do you mean by that, and what evidence do you have for believing it? Have you got some special revelation into the moral status of as-yet-hypothetical AIs? Some reason for thinking that it is more likely that beings of superhuman intelligence don't have moral status than that they do?
0Tem42
Useful AI.

I think there's a question of how we create an adequate model of the world for this idea to work. It's probably not practical to build one by hand, so we'd likely need to hand the task over to an AI.

Might it be possible to use the modelling module of an AI in the absence of the planning module? (or with a weak planning module) If so, you might be able to feed it a great deal of data about the universe, and construct a model that could then be "frozen" and used as the basis for the AI's "virtual universe."

0Stuart_Armstrong
Generally, we don't. A model of the (idealised) computational process of the AI is very simple compared with the real world, and the rest of the model just needs to include enough detail for the problem we're working on.

Have you considered coating your fingers with capsaicin to make scratching your mucous membranes immediately painful?

(Apologies if this advice is unwanted - I have not experienced anything similar, and am just spitballing).

2Lumifer
Oh, dear. Please don't try this at home. You're likely to end up with a pain rating of 10 on this guy's scale.
3[anonymous]
Actually I really appreciate that suggestion! I hadn't considered it. I always appreciate recommendations! It may also be useful so that I rub my eyes less. My optometrist said to avoid that since I'm losing my 3D perception. As of the end of the sentence, I intend to look up capsaicin and similarly purposed agents, and the appropriate dosage and application. Then I'll look up safety considerations if the information is indicative, and availability.

I made serious progress on a system for generating avatar animations based on the motion of a VR headset. It still needs refinement, but I'm extremely proud of what I've got so far.

https://www.youtube.com/watch?v=klAsxamqkkU

For Omnivores:

  • Do you think the level of meat consumption in America is healthy for individuals? Do you think it's healthy for the planet?

Meat is obviously healthy for individuals. We evolved to eat as much of it as we could get. Many nutrients seem to be very difficult to obtain in sufficient, bioavailable form from an all-vegetable diet. I suspect most observant vegans are substantially malnourished.

On the planet side of things, meat is an environmental disaster. The methane emissions are horrifying, as is the destruction of rainforest. Hopefull... (read more)

0Raelifin
Yikes. If all responses are this good, I'm sure the judges will have a rough time! Thanks so much for your words. At some point you'll need to PM me with a description of your actual beliefs so I can give feedback to the judges and see how you do.

Technically, it's the frogs and fish that routinely freeze through the winter. Of course, they evolved to pull off that stunt, so it's less impressive.

We've cryopreserved a whole mouse kidney before, and were able to thaw and use it as a mouse's sole kidney.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2781097/

We've also shown that nematode memory can survive cryopreservation:

http://www.dailymail.co.uk/sciencetech/article-3107805/Could-brains-stay-forever-young-Memories-survive-cryogenic-preservation-study-shows.html

The trouble is that larger chunks of tis... (read more)

0SeekingEternity
Nitpick: The article talks about a rabbit kidney, not a mouse one. It also isn't entirely clear how cold the kidney got, or how long it was stored. It's evidence in favor of "at death" cryonics, but I'm not sure how strong that evidence is. Also, it's possible to survive with substantially more kidney damage than you would even want to incur as brain damage.

The issue is that crashing the mosquito population doesn't work if even a few of them survive to repopulate - the plan needs indefinite maintenance, and the mosquitoes will eventually evolve to avoid our lab-bred dud males.

I wonder if you could breed a version of the mosquito that's healthy but has an aversion to humans, make your genetic change dominant, and then release a bunch of THOSE mosquitoes. There'd be less of a fitness gap between the modified mosquitoes and the original species, so if we just kept dumping modified males every year for a decade or two, we might be able to completely drive the original human-seeking genes out of the ecosystem.
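As a rough illustration of how repeated releases could shift the gene pool, here is a toy single-locus sketch. It assumes random mating, non-overlapping generations, no fitness cost for the introduced allele, and that the released males are homozygous for it; the release fraction, time horizon, and function name are all arbitrary choices for this example, and it ignores the gene-drive mechanics raised in the replies below.

```python
# Toy single-locus model: each year, released homozygous males make up a
# fraction `release_frac` of the breeding males. Random mating, no fitness
# cost, non-overlapping generations -- purely illustrative assumptions.
def allele_frequency_over_years(release_frac=0.2, years=20, p0=0.0):
    p = p0  # frequency of the engineered aversion allele in the wild population
    history = [p]
    for _ in range(years):
        p_males = (1 - release_frac) * p + release_frac * 1.0  # male gamete pool
        p = (p + p_males) / 2                                  # offspring allele frequency
        history.append(p)
    return history

print(allele_frequency_over_years()[-1])  # allele frequency after 20 years of releases
```

Under these assumptions the engineered allele climbs toward fixation only asymptotically, which is consistent with the point that the releases would need to continue for a decade or two.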

0HungryHobo
Or engineer mosquitoes which are allergic or immune to the malaria parasite and release huge numbers of those. Still, once you've done the expensive bit of engineering a mosquito to produce sterile offspring, breeding a lot of them is the cheap part, and crashing the population, even temporarily, goes a long way towards wiping out malaria in an area, since it needs a certain critical mass to spread.
0ChristianKl
Why do you think mosquitos that carry malaria didn't repopulate the US and Europe after they were first driven out?
0Douglas_Knight
The proposal is not releasing dud males. The proposal is to use genetic drive, and the post included a link for people who have never heard of it. Most mosquito populations are very specific in what hosts they parasitize, so removing humans from the list is not an option. Humans might be the only host!

Not much to add here, except that it's unlikely that Alex is an exceptional example of a parrot. The researcher purchased him from a pet store at random to try to eliminate that objection.

Interesting! I didn't know that, and that makes a lot of sense.

If I were to restate my objection more strongly, I'd say that parrots also seem to exceed chimps in language capabilities (chimps having six billion cortical neurons). The reason I didn't bring this up originally is that chimp language research is a horrible, horrible field full of a lot of bad science, so it's difficult to be too confident in that result.

Plenty of people will tell you that signing chimps are just as capable as Alex the parrot - they just need a little bit of interpretation f... (read more)

3jacob_cannell
I'd strongly suggest the movie Project Nim, if you haven't seen it. In some respects chimpanzee intelligence develops faster than that of a human child, but it also levels off much earlier. Their childhood development period is much shorter. To first approximation, general intelligence in animals can be predicted by the number of neurons/synapses in general learning modules, but this isn't the only factor. I don't have an exact figure, but that poster article suggests parrots have perhaps 1-3 billion-ish cortical neuron equivalents. The next most important factor is probably degree of neoteny, or learning window. Human intelligence develops over the span of 20 years. Parrots seem exceptional in terms of lifespan and are thus perhaps more human-like - they maintain a childlike state for much longer. We know from machine learning that the 'learning rate' is a super important hyperparameter - learning faster has a huge advantage, but if you learn too fast you get inferior long-term results for your capacity. Learning slowly is obviously more costly, but it can generate more efficient circuits in the long term. I inferred/guessed that parrots have very long neotenic learning windows, and the articles on Alex seem to confirm this. Alex reached a vocabulary of about 100 words by age 29, a few years before his untimely death. The trainer - Irene Pepperberg - claims that Alex was still learning and had not reached peak capability. She rated Alex's intelligence as roughly equivalent to that of a 5 year old. This about makes sense if the parrot has roughly 1/6th our number of cortical neurons, but has similar learning efficiency and a long learning window. To really compare chimp vs parrot learning ability, we'd need more than a handful of samples. There is also a large selection effect here - because parrots make reasonably good pets, whereas chimps are terrible, dangerous pets. So we haven't tested chimps as much. Alex is more likely to be a very bright parrot, whereas t
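The learning-rate point above can be made concrete with a toy example. This is a minimal sketch of plain gradient descent on a one-dimensional quadratic loss; the step sizes, step count, and function name are arbitrary, and it only illustrates sensitivity to the hyperparameter, not the biological capacity/neoteny argument.

```python
# Toy illustration: learning rate as a critical hyperparameter.
# Gradient descent on L(w) = (w - 3)^2, starting from w = 0.
def final_loss(lr, steps=50):
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3.0)  # dL/dw
        w -= lr * grad
    return (w - 3.0) ** 2

for lr in (0.01, 0.1, 1.1):
    # too small: still far from the optimum; moderate: converges; too large: diverges
    print(lr, final_loss(lr))
```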

Yes it is a pure ANN - according to my use of the term ANN (arguing over definitions is a waste of time). ANNs are fully general circuit models, which obviously can re-implement any module from any computer - memory, database, whatever. The defining characteristics of an ANN are - simulated network circuit structure based on analog/real valued nodes, and some universal learning algorithm over the weights - such as SGD.

I think you misunderstood me. The current DeepMind AI that they've shown the public is a pure ANN. However, it has serious limitations be... (read more)

0jacob_cannell
So you are claiming that either you already understood AI/AGI completely when you arrived at LW, or you updated on LW/MIRI writings because they are 'reputable' - even though their positions are disavowed or even ridiculed by many machine learning experts. I replied here, and as expected - it looks like you are factually mistaken in the assertion that disagreed with the ULH. Better yet, the outcome of your cat vs bird observation was correctly predicted by the ULH, so that's yet more evidence in its favor.

First off the bat, you absolutely can create an AGI that is a pure ANN. In fact the most successful early precursor AGI we have - the Atari DeepMind agent - is a pure ANN. Your claim that ANNs/Deep Learning is not the end of all AGI research is quickly becoming a minority position.

The DeepMind agent has no memory, one of the problems that I noted in the first place with naive ANN systems. The DeepMind team's solution to this is the neural Turing machine model, which is a hybrid system between a neural network and a database. It's not a pure ANN. It is... (read more)
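For readers unfamiliar with the neural Turing machine idea, here is a minimal sketch of its content-based read operation (cosine similarity plus a softmax over memory slots, in the spirit of Graves et al. 2014). The array sizes, the key-strength value, and the function name are illustrative; this shows only the read head, not the full differentiable controller or the write mechanism.

```python
import numpy as np

# Content-based read from an external memory matrix (NTM-style read head).
def content_read(memory, key, beta=5.0):
    # memory: (N, M) array of N memory slots; key: (M,) query from the controller
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    weights = np.exp(beta * sims)
    weights /= weights.sum()       # softmax attention over the N slots
    return weights @ memory        # read vector: attention-weighted sum of slots

memory = np.random.randn(8, 4)
print(content_read(memory, memory[3]))  # returns something close to slot 3
```

Because the read is a differentiable weighted sum rather than a discrete lookup, the whole memory mechanism can be trained with gradient descent alongside the controller network, which is what makes the hybrid work.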

1jacob_cannell
Yes it is a pure ANN - according to my use of the term ANN (arguing over definitions is a waste of time). ANNs are fully general circuit models, which obviously can re-implement any module from any computer - memory, database, whatever. The defining characteristics of an ANN are - simulated network circuit structure based on analog/real valued nodes, and some universal learning algorithm over the weights - such as SGD. You don't understand my position. I don't believe DL as it exists today is somehow the grail of AI. And yes I'm familiar with Hinton's 'Capsule' proposals. And yes I agree there is still substantial room for improvement in ANN microarchitecture, and especially for learning invariances - and unsupervised especially. For any theory of anything the brain does - if it isn't grounded in computational neuroscience data, it is probably wrong - mainstream or not. You don't update on forum posts? Really? You seem pretty familiar with MIRI and LW positions. So are you saying that you arrived at those positions all on your own somehow? Then you just showed up here, thankfully finding other people who just happened to have arrived at all the same ideas?

Yes, I've read your big universal learner post, and I'm not convinced. This does seem to be the crux of our disagreement, so let me take some time to rebut:

First off, you're seriously misrepresenting the success of deep learning as support for your thesis. Deep learning algorithms are extremely powerful, and probably have a role to play in building AGI, but they aren't the end-all, be-all of AI research. For starters, modern deep learning systems are absolutely fine-tuned to the task at hand. You say that they have only "a small number of hyperparamet... (read more)

2jacob_cannell
Cat brains are much larger, but physical size is irrelevant. What matters is neuron/synapse count. According to my ULM theory, the most likely explanation for the superior learning ability of parrots is a larger number of neurons/synapses in their general learning modules (whatever the equivalent of the cortex is in birds), and thus more computational power available for general learning. Stop right now, and consider this bet - I will bet that parrots have more neurons/synapses in their cortex-equivalent brain regions than cats. Now a little Google searching leads to this blog article which summarizes this recent research - Complex brains for complex cognition - neuronal scaling rules for bird brains. From the abstract: The telencephalon is believed to be the equivalent of the cortex in birds. The cortex of the smallest monkeys has about 400 million neurons, whereas the cat's cortex has about 300 million neurons. A medium sized monkey such as a night monkey has more than 1 billion cortical neurons.
-2jacob_cannell
Do you actually believe that evolved modularity is a better explanation of the brain than the ULM hypothesis? Do you have evidence for this belief or is it simply that which you want to be true? Do you understand why the computational neuroscience and machine learning folks are moving away from evolved modularity towards the ULM view? If you do have evidence please provide it in a critique in the comments for that post where I will respond. Make some specific predictions for the next 5 years about deep learning or ANNs. Let us see if we actually have significant differences of opinion. If so I expect to dominate you in any prediction market or bets concerning the near term future of AI. First off the bat, you absolutely can create an AGI that is a pure ANN. In fact the most successful early precursor AGI we have - the Atari DeepMind agent - is a pure ANN. Your claim that ANNs/Deep Learning is not the end of all AGI research is quickly becoming a minority position. What the Scotsman! Done and done. Next! I discussed this in the comments - it absolutely does explain neurotypical standardization. It's a result of topographic/geometric wiring optimization. There is an exactly optimal location for every piece of functionality, and the brain tends to find those same optimal locations in each human. But if you significantly perturb the input sense or the brain geometry, you can get radically different results. Consider the case of extreme hydrocephaly - where fluid fills in the center of the brain and replaces most of the brain and squeezes the remainder out to a thin surface near the skull. And yet, these patients can have above average IQs. Optimal dynamic wiring can explain this - the brain is constantly doing global optimization across the wiring structure, adapting to even extreme deformations and damage. How does evolved modularity explain this? This is nonsense - language processing develops in general purpose cortical modules; there is no specific language circuitry.

I seriously doubt that. Plenty of humans want to kill everyone (or, at least, large groups of people). Very few succeed. These agents would be a good deal less capable.

1turchin
Just imagine a Stuxnet-style computer virus which finds DNA synthesisers and prints a different virus on each of them, calculating exact DNA mutations for hundreds of different flu strains.

So, to sum up, your plan is to create an arbitrarily safe VM, and use it to run brain-emulation-style de novo AIs patterned on human babies (presumably with additional infrastructure to emulate the hard-coded changes that occur in the brain during development to adulthood: adult humans are not babies + education). You then want to raise many, many iterations of these things under different conditions to try to produce morally superior specimens, then turn those AIs loose and let them self-modify to godhood.

Is that accurate? (Seriously, let me know if I'm mi... (read more)

4jacob_cannell
No. I said: I used brain emulations as an analogy to help aid your understanding. Because unless you have deep knowledge of machine learning and computational neuroscience, there are huge inferential distances to cross. Yes we are. I have made a detailed, extensive, citation-full, and well reviewed case that human minds are just that. All of our understanding about the future of AGI is based ultimately on our models of the brain and AI in general. I am claiming that the MIRI viewpoint is based on an outdated model of the brain, and a poor understanding of the limits of computation and intelligence. I will summarize for one last time. I will then no longer repeat myself because it is not worthy of my time - any time spent arguing this is better spent preparing another detailed article, rather than a little comment. There is extensive uncertainty concerning how the brain works and what types of future AI are possible in practice. In situations of such uncertainty, any good sane probabilistic reasoning agent should come up with a multimodal distribution that spreads belief across several major clusters. If your understanding of AI comes mainly from reading LW - you are probably biased beyond hope. I'm sorry, but this is true. You are stuck in a box and don't even know it. Here are the main key questions that lead to different belief clusters:

  • Are the brain's algorithms for intelligence complex or simple?

  • And related - are human minds mainly software or mainly hardware?

  • At the practical computational level, does the brain implement said algorithms efficiently or not?

If the human mind is built out of a complex mess of hardware-specific circuits, and the brain is far from efficient, then there is little to learn from the brain. This is Yudkowsky/MIRI's position. This viewpoint leads to a focus on pure math and avoidance of anything brain-like (such as neural nets). In this viewpoint hard takeoff is likely, AI is predicted to be nothing like human minds, etc.

A ULM also requires a utility function or reward circuitry with some initial complexity, but we can also use the same universal learning algorithms to learn that component. It is just another circuit, and we can learn any circuit that evolution learned.

Okay, so we just have to determine human terminal values in detail, and plug them into a powerful maximizer. I'm not sure I see how that's different from the standard problem statement for friendly AI. Learning values by observing people is exactly what MIRI is working on, and it's not a trivial problem.

F... (read more)

2[anonymous]
Why do you even go around thinking that the concept of "terminal values", which is basically just a consequentialist steelmanning Aristotle, cuts reality at the joints? That part honestly isn't that hard once you read the available literature about paradox theorems.
0jacob_cannell
No - not at all. Perhaps you have read too much MIRI material, and not enough of the neuroscience and machine learning I referenced. An infant is not born with human 'terminal values'. It is born with some minimal initial reward learning circuitry to bootstrap learning of complex values from adults. Stop thinking of AGI as some weird mathy program. Instead think of brain emulations - and then you have obvious answers to all of these questions. You apparently didn't read my article or links to earlier discussion? We can easily limit the capability of minds by controlling knowledge. A million smart evil humans is dangerous - but only if they have modern knowledge. If they have only, say, medieval knowledge, they are hardly dangerous. Also - they don't realize they are in a sim. Also - the point of the sandbox sims is to test architectures, reward learning systems, and most importantly - altruism. Designs that work well in these safe sims are then copied into less safe sims and finally the real world. Consider the orthogonality thesis - AI of any intelligence level can be combined with any values. Thus we can test values on young/limited AI before scaling up their power. Sandbox sims can be arbitrarily safe. It is the only truly practical workable proposal to date. It is also the closest to what is already used in industry. Thus it is the solution by default. Ridiculous nonsense. Many humans today are aware of the sim argument. The Gnostics were aware in some sense 2,000 years ago. Do you think any of them broke out? Are you trying to break out? How? Again, stop thinking we create a single AI program and then we are done. It will be a large-scale evolutionary process, with endless selection, testing, and refinement. We can select for super altruistic moral beings - like Buddha/Gandhi/Jesus level. We can take the human capability for altruism, refine it, and expand on it vastly. Quixotic waste of time.

Here's one from a friend of mine. It's not exactly an argument against AI risk, but it is an argument that the problem may be less urgent than it's traditionally presented.

  1. There's plenty of reason to believe that Moore's Law will slow down in the near future.

  2. Progress on AI algorithms has historically been rather slow.

  3. AI programming is an extremely high level cognitive task, and will likely be among the hardest things to get an AI to do.

  4. These three things together suggest that there will be a 'grace period' between the development of general agents

... (read more)
2Gram_Stone
There are parts that are different, but it seems worth mentioning that this is quite similar to certain forms of Bostrom's second-guessing arguments, as discussed in Chapter 14 of Superintelligence and in Technological Revolutions: Ethics and Policy in the Dark. I should mention that he does seem to be generally against attempting to manipulate people into doing the best thing.
3turchin
A dumb agent could also cause human extinction. "To kill all humans" is a computationally simpler task than to create superintelligence. And it may be simpler by many orders of magnitude.

  1. Intelligence is an extendible method that enables software to satisfy human preferences.

  2. If human preferences can be satisfied by an extendible method, humans have the capacity to extend the method.

  3. Extending the method that satisfies human preferences will yield software that is better at satisfying human preferences.

  4. Magic happens.

  5. There will be software that can satisfy all human preferences perfectly but which will instead satisfy orthogonal preferences, causing human extinction.

This is deeply silly. The thing about arguing from ... (read more)

I think you misunderstand my argument. The point is that it's ridiculous to say that human beings are 'universal learning machines' and you can just raise any learning algorithm as a human child and it'll turn out fine. We can't even raise 2-5% of HUMAN CHILDREN as human children and have it reliably turn out okay.

Sociopaths are different from baseline humans by a tiny degree. It's got to be a small number of single-gene mutations. A tiny shift in information. And that's all it takes to make them consistently UnFriendly, regardless of how well they're rai... (read more)

1jacob_cannell
No - it is not. See the article for the in depth argument and citations backing up this statement. Well almost - A ULM also requires a utility function or reward circuitry with some initial complexity, but we can also use the same universal learning algorithms to learn that component. It is just another circuit, and we can learn any circuit that evolution learned. Sure - which is why I discussed sim sandbox testing. Did you read about my sim sandbox idea? We test designs in a safe sandbox sim, and we don't copy sociopaths. No, this isn't obvious at all. AGI is going to be built from the same principles as the brain - because the brain is a universal learning machine. The AGI's mind structure will be learned from training and experiential data such that the AI learns how to think like humans and learns how to be human - just like humans do. Human minds are software constructs - without that software we would just be animals (feral humans). An artificial brain is just another computer that can run the human mind software. Yes, but it's only a part of the brain and a fraction of the brain's complexity, so obviously it can't be harder than reverse engineering the whole brain.

To rebut: sociopaths exist.

3jacob_cannell
Super obvious re-rebut: sociopaths exist, and yet civilization endures. Also, we can rather obviously test in safe simulation sandboxes and avoid copying sociopaths. The argument that sociopaths are a fundamental showstopper must be based then on some magical view of the brain (because obviously evolution succeeds in producing non-sociopaths, so we can copy its techniques if they are nonmagical). Remember the argument is against existential threat level UFAI, not some fraction of evil AIs in a large population.
4David_Bolin
That is not a useful rebuttal if in fact it is impossible to guarantee that your AGI will not be a sociopath no matter how you program it. Eliezer's position generally is that we should make sure everything is set in advance. Jacob_cannell seems to be basically saying that much of an AGI's behavior is going to be determined by its education, environment, and history, much as is the case with human beings now. If this is the case, it is unlikely there is any way to guarantee a good outcome, but there are ways to make that outcome more likely.

What are the advantages to the hybrid approach as compared to traditional cryonics? Histological preservation? Thermal cracking? Toxicity?

0Andy_McKenzie
As I understand it, the major advantage is that doing the cross-linking first (e.g. with glutaraldehyde) saves you time and maintains blood vessels so that traditional cryoprotectants can diffuse more widely across brain tissue. It also may allow easier validation of the cryopreservation protocol, because you don't have as many dehydration issues.

That sounds fascinating. Could you link to some non-paywalled examples?

4lukeprog
Here are a few.

The odds aren't good, but here's hoping.

Amusingly, I just wrote an (I think better) article about the same thing.

http://www.makeuseof.com/tag/heres-scientists-think-worried-artificial-intelligence/

Business Insider can probably muster more attention than I can though, so it's a tossup about who's actually being more productive here.