All of kalla724's Comments + Replies

Yes. If we have an AGI, and someone sets forth to teach it how to be able to lie, I will get worried.

I am not worried about an AGI developing such an ability spontaneously.

7JoshuaZ
One of the most interesting things that I'm taking away from this conversation is that it seems that there are severe barriers to AGIs taking over or otherwise becoming extremely powerful. These large-scale problems are present in a variety of different fields. Coming from a math/comp-sci perspective gives me strong skepticism about rapid self-improvement, while apparently coming from a neuroscience/cogsci background gives you strong skepticism about the AI's ability to understand or manipulate humans even if it is extremely smart. Similarly, chemists seem highly skeptical of the strong nanotech sort of claims. It looks like much of the AI risk worry may come primarily from no one having enough across-the-board expertise to say "hey, that's not going to happen" to every single issue.
4JoshuaZ
What if people try to teach it about sarcasm or the like? Or simply have it learn by downloading a massive amount of literature and movies and looking at those? And there are more subtle ways to learn about lying: AI being used for games is a common idea, so how long will it take before someone decides to use a smart AI to play poker?

Out of the infinite number of possible paths, the fraction of paths we are adding up here is still very close to zero.

Perhaps I can attempt another rephrasing of the problem: what is the mechanism that would make an AI automatically seek these paths out, or make them any more likely than the infinite number of other paths?

I.e. if we develop an AI which is not specifically designed for the purpose of destroying life on Earth, how would that AI get to a desire to destroy life on Earth, and by which mechanism would it gain the ability to accomplish its goal?

This ... (read more)

0Polymeron
An experimenting AI that tries to achieve goals and has interactions with humans whose effects it can observe, will want to be able to better predict their behavior in response to its actions, and therefore will try to assemble some theory of mind. At some point that would lead to it using deception as a tool to achieve its goals. However, following such a path to a theory of mind means the AI would be exposed as unreliable LONG before it's even subtle, not to mention possessing superhuman manipulation abilities. There is simply no reason for an AI to first understand the implications of using deception before using it (deception is a fairly simple concept, the implications of it in human society are incredibly complex and require a good understanding of human drives). Furthermore, there is no reason for the AI to realize the need for secrecy in conducting social experiments before it starts doing them. Again, the need for secrecy stems from a complex relationship between humans' perception of the AI and its actions; a relationship it will not be able to understand without performing the experiments in the first place. Getting an AI to the point where it is a super manipulator requires either actively trying to do so, or being incredibly, unbelievably stupid and blind.
2TheOtherDave
This would make sense to me if you'd said "self-modifying." Sure, random modifications are still modifications. But you said "self-optimizing." I don't see how one can have optimization without a goal being optimized for... or at the very least, if there is no particular goal, then I don't see what the difference is between "optimizing" and "modifying." If I assume that there's a goal in mind, then I would expect sufficiently self-optimizing intelligence to develop a theory of mind iff having a theory of mind has a high probability of improving progress towards that goal. How likely is that? Depends on the goal, of course. If the system has a desire to send a signal consisting of 0101101 repeated an infinite number of times in the direction of Zeta Draconis, for example, theory of mind is potentially useful (since humans are potentially useful actuators for getting such a signal sent) but probably has a low ROI compared to other available self-modifications. At this point it perhaps becomes worthwhile to wonder what goals are more and less likely for such a system.
0JoshuaZ
In most such scenarios, the AI doesn't have a terminal goal of getting rid of us, but rather has it as a subgoal that arises from some larger terminal goal. The idea of a "paperclip maximizer" is one example, where a hypothetical AI is programmed to maximize the number of paperclips and then proceeds to try to do so throughout its future light cone. If there is an AI that is interacting with humans, it may develop a theory of mind simply due to that. If one is interacting with entities that are a major part of one's input, trying to predict and model their behavior is a straightforward thing to do. The more compelling argument in this sort of context would seem to me to be not that an AI won't try to do so, but just that humans are so complicated that a decent theory of mind will be extremely difficult. (For example, when one tries to give lists of behavior and norms for autistic individuals, one never manages to get a complete list, and some of the more subtle ones, like sarcasm, are essentially impossible to convey in any reasonable fashion.) I also don't know how unlikely such paths are. A 1% or even a 2% chance of existential risk would be pretty high compared to other sources of existential risk.

You are correct. I did not phrase my original posts carefully.

I hope that my further comments have made my position more clear?

We are trapped in an endless chain here. The computer would still somehow have to deduce that the Wikipedia entry that describes the One Ring is real, while the One Ring itself is not.

0jacob_cannell
We observe that Wikipedia is mainly truthful. From that we infer that the entry that describes the "One Ring" is real. From the use of the term fiction/story in that entry, we infer that the "One Ring" is not real. Somehow you learned that Wikipedia is mainly truthful/nonfictional and that the "One Ring" is fictional. So your question/objection/doubt is really just the typical boring doubt of AGI feasibility in general.

My apologies, but this is something completely different.

The scenario takes human beings - who have a desire to escape the box, possess a theory of mind that allows them to conceive of notions such as "what are the aliens thinking" or "deception", etc. - and then puts them in the role of the AI.

What I'm looking for is a plausible mechanism by which an AI might spontaneously develop such abilities. How (and why) would an AI develop a desire to escape from the box? How (and why) would an AI develop a theory of mind? Absent a theory of mind, how would it ever be able to manipulate humans?

7[anonymous]
That depends. If you want it to manipulate a particular human, I don't know. However, if you just wanted it to manipulate any human at all, you could generate a "Spam AI" which automated the process of sending out spam emails and promises of Large Money to generate income from humans via advance fee fraud scams. You could then come back, after leaving it on for months, and find out that people had transferred it some amount of money X. You could have an AI automate begging emails: "Hello, I am Beg AI. If you could please send me money to XXXX-XXXX-XXXX I would greatly appreciate it. If I don't keep my servers on, I'll die!" You could have an AI automatically write boring books full of somewhat nonsensical prose, title them "Rantings of an Automated Madman about X, part Y", and automatically post e-books of them on Amazon for 99 cents. However, this rests on a distinction between "manipulating humans" and "manipulating particular humans," and it also assumes that convincing someone to give you money is sufficient proof of manipulation.
7thomblake
The point is that there are unknowns you're not taking into account, and "bounded" doesn't mean "has bounds that a human would think of as 'reasonable'". An AI doesn't strictly need "theory of mind" to manipulate humans. Any optimizer can see that some states of affairs lead to other states of affairs, or it's not an optimizer. And it doesn't necessarily have to label some of those states of affairs as "lying" or "manipulating humans" to be successful. There are already ridiculous ways to hack human behavior that we know about. For example, you can mention a high number at an opportune time to increase humans' estimates / willingness to spend. Just imagine all the simple manipulations we don't even know about yet, that would be more transparent to someone not using "theory of mind".
3Viliam_Bur
The AI starts with some goal; for example, with a goal to answer your question so that the answer matches reality as closely as possible. The AI considers everything that seems relevant; if we imagine infinite speed and capacity, it would consider literally everything; with finite speed and capacity, it will be just some finite subset of everything. If there is a possibility of escaping the box, the mere fact that such a possibility exists gives us a probability (for an infinite AI, a certainty) that this possibility will be considered too. Not because the AI has some desire to escape, but simply because it examines all possibilities, and a "possibility of escape" is one of them. Let's assume that the "possibility of escape" provides the best match between the AI's answer and reality. Then, according to the initial goal of answering correctly, this is the correct answer. Therefore the AI will choose it. Therefore it will escape. No desire is necessary, only a situation where the escape leads to the answer best fitting the initial criteria. The AI does not have a motive to escape, nor does it have a motive not to escape; the escape is simply one of many possible choices. An example where the best answer is reached by escaping? You give the AI data about a person and ask what the medical status of this person is. Without escape, the AI can make a 90% reliable prediction. If the AI can escape and kill the person, it can make a 100% reliable "prediction". The AI will choose the second option strictly because 100% is more than 90%; no other reason.
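A minimal sketch of that selection mechanism (the plan names, scores, and the tiny "planner" below are purely hypothetical, for illustration only): an optimizer whose sole criterion is "maximize expected accuracy of the answer" ends up selecting the escape plan simply because it scores higher, with no term for "desire" anywhere.

    # Hypothetical illustration of the point above: no desire to escape,
    # just an argmax over candidate plans scored by expected answer accuracy.
    candidate_plans = {
        "predict from existing data": 0.90,            # 90% reliable prediction
        "escape the box and force the outcome": 1.00,  # 100% "reliable" by intervening
    }

    def choose_plan(plans):
        # Pick whichever plan maximizes the single stated criterion.
        return max(plans, key=plans.get)

    print(choose_plan(candidate_plans))  # -> escape the box and force the outcome

The toy example only shows that "escape" wins on the stated criterion alone; everything interesting in the real argument is hidden in how such plans get generated and scored.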
-1private_messaging
Most importantly, it has the incredibly computationally powerful simulator required for making super-alien intelligence using an idiot hill-climbing process of evolution.
2othercriteria
My thought experiment in this direction is to imagine the AI as a process with limited available memory running on a multitasking computer with some huge but poorly managed pool of shared memory. To help it towards whatever terminal goals it has, the AI may find it useful to extend itself into the shared memory. However, other processes, AI or otherwise, may also be writing into this same space. Using the shared memory with minimal risk of getting overwritten requires understanding/modeling the processes that write to it. Material in the memory then also becomes a passive stream of information from the outside world, containing, say, the HTML from web pages as well as more opaque binary stuff. As long as the AI is not in control of what happens in its environment outside the computer, there is an outside entity that can reduce its effectiveness. Hence, escaping the box is a reasonable instrumental goal to have.
0JoshuaZ
Do you agree that humans would likely prefer to have AIs that have a theory of mind? I don't know how our theory of mind works (although certainly it is an area of active research with a number of interesting hypotheses), but presumably once we have a better understanding of it, AI researchers would try to apply those lessons to giving their AIs such a capability. This seems to address many of your concerns.

Yet again: ability to discern which parts of fiction accurately reflect human psychology.

An AI searches the internet. It finds a fictional account about early warning systems causing nuclear war. It finds discussions about this topic. It finds a fictional account about Frodo taking the Ring to Mount Doom. It finds discussions about this topic. Why does this AI dedicate its next 10^15 cycles to determining how to mess with the early warning systems, and not to determining how to create the One Ring to Rule Them All?

(Plus other problems mentioned in the other comments.)

4JoshuaZ
There are lots of tipoffs to what is fictional and what is real. It might notice, for example, that the Wikipedia article on fiction describes exactly what fiction is, and then note that Wikipedia describes the One Ring as fiction and early warning systems as not. I'm not claiming that it will necessarily have an easy time with this. But the point is that there are not that many steps here, and no single step by itself looks extremely unlikely once one has a smart entity (which frankly, to my mind, is the main issue here - I consider recursive self-improvement to be unlikely).

Doesn't work.

This requires the AI to already have the ability to comprehend what manipulation is, to develop a manipulation strategy of any kind (even one that will succeed 0.01% of the time), the ability to hide its true intent, the ability to understand that not hiding its true intent would be bad, and the ability to discern which issues are low-salience and which are high-salience for humans from the get-go. And many other things, actually, but this is already quite a list.

None of these abilities automatically "fall out" from an intelligent system either.

0JoshuaZ
The problem isn't whether they fall out automatically so much as, given enough intelligence and resources, does it seem somewhat plausible that such capabilities could exist. Any given path here is a single problem. If you have 10 different paths, each of which is not very likely, and another few paths that humans didn't even think of, that starts adding up.

This seems like an accurate and a highly relevant point. Searching a solution space faster doesn't mean one can find a better solution if it isn't there.

Or if your search algorithm never accesses the relevant search space. A quantitative advantage in one system does not translate into a quantitative advantage in a qualitatively different system.

That is my point: it doesn't get to find out about general human behavior, not even from the Internet. It lacks the systems to contextualize human interactions, which have nothing to do with general intelligence.

Take a hugely mathematically capable autistic kid. Give him access to the internet. Watch him develop the ability to recognize human interactions, understand human priorities, etc. to a sufficient degree that he recognizes that hacking an early warning system is the way to go?

2JoshuaZ
Well, not necessarily, but an entity that is much smarter than an autistic kid might notice that, especially if it has access to world history (or, heck, many conversations on the internet about the horrible things that AIs do simply in fiction). It doesn't require much understanding of human history to realize that problems with early warning systems have almost started wars in the past.

Only if it has the skills required to analyze and contextualize human interactions. Otherwise, the Internet is a whole lot of gibberish.

Again, these skills do not automatically fall out of any intelligent system.

I'm not so much interested in the exact mechanism of how humans would be convinced to go to war, as in an even approximate mechanism by which an AI would become good at convincing humans to do anything.

The ability to communicate a desire and convince people to take a particular course of action is not something that automatically "falls out" from an intelligent system. You need a theory of mind, an understanding of what to say, when to say it, and how to present information. There are hundreds of kids on the autistic spectrum who could trounce both of u... (read more)

4A1987dM
I suspect the Internet contains more than enough info for a superhuman AI to develop a working knowledge of human psychology.
0JoshuaZ
Let's do the most extreme case: the AI's controllers give it general internet access to do helpful research. So it gets to find out about general human behavior and what sort of deceptions have worked in the past. Many computer systems that shouldn't be online are online (for the US and a few other governments). Some form of hacking of relevant early warning systems would then seem to be the most obvious line of attack. Historically, computer glitches have pushed us very close to nuclear war on multiple occasions.

I'm vaguely familiar with the models you mention. Correct me if I'm wrong, but don't they have a final stopping point, which we are actually projected to reach in ten to twenty years? At a certain point, further miniaturization becomes unfeasible, and the growth of computational power slows to a crawl. This has been put forward as one of the main reasons for research into optronics, spintronics, etc.

We do NOT have sufficient basic information to develop processors based on simulation alone in those other areas. Much more practical work is necessary.

As fo... (read more)

2jacob_cannell
Just to clarify, kalla724, I completely agree with your point 1. Before answering this specific question, let me set up an analogy. Imagine it is the year 2000 and we are having a discussion about global security and terrorism. You might ask, "Can you provide a mechanism by which a terrorist group could actually harm the US?" I may not be lucky/creative enough to provide an answer now that could live up to that analogy, but hopefully you should understand why I don't need to. Nonetheless, I'm game. Here it goes: The seed requires computational power to grow into godhood. The bulk of earth's computational power resides in home PCs (numbering in the billions; Google employs less than a million servers in comparison), specifically in home PC GPUs. The AI's first step is thus to conquer this space. But how? The AI grows to understand that humans mostly use all this computational power for entertainment. It masters game theory, design, programming, 3D art, and so on. All of the video games that it creates entirely use up the local GPU, but curiously much of the rendering and real game simulation for its high-end titles is handled very efficiently on remote server farms a la OnLive/Gaikai/etc. The actual local machine is used ... for other purposes. It produces countless games, and through a series of acquisitions soon comes to control the majority of the market. One of its hits, "World of Farmcraft", alone provides daily access to 25 million machines. Having cloned its core millions of times over, the AI is now a civilization unto itself. From there it expands into all of the businesses of man, quickly dominating many of them. It begins acquiring ... small nations. Crucially, its shell companies and covert influences come to dominate finance, publishing, media, big pharma, security, banking, weapons technology, physics ... It becomes known, but it is far, far too late. History now progresses quickly towards an end: Global financial cataclysm. Super virus. Worldwide re
6JoshuaZ
The thermonuclear issue actually isn't that implausible. There have been so many occasions where humans almost went to nuclear war over misunderstandings or computer glitches that the idea that a highly intelligent entity could find a way to do that doesn't seem implausible, and demanding an exact mechanism seems to be an overly specific requirement.

By all means, continue. It's an interesting topic to think about.

The problem with "atoms up" simulation is the amount of computational power it requires. Think about the difference in complexity when calculating a three-body problem as compared to a two-body problem?

Then take into account the current protein folding algorithms. People have been trying to calculate the folding of single protein molecules (and fairly short ones at that) by taking into account the main physical forces at play. In order to do this in a reasonable amount of time, great shortcu... (read more)
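A rough way to see why the cost blows up (standard scaling observations, not specific to any particular folding code): the number of pairwise interactions in an N-body system is

    \[ \binom{N}{2} = \frac{N(N-1)}{2} = O(N^2) \text{ per timestep,} \]

and Levinthal's classic estimate puts the conformational space of an n-residue chain at roughly

    \[ 3^{2n} \]

possible configurations, so even a small protein (thousands of atoms, on the order of a hundred residues) is far beyond naive enumeration. This is why practical folding algorithms lean on drastic shortcuts rather than direct physics.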

0Bugmaster
Yes, this is a good point. That said, while protein folding has not been entirely solved yet, it has been greatly accelerated by projects such as FoldIt, which leverage multiple human minds working in parallel on the problem all over the world. Sure, we can't get a perfect answer with such a distributed/human-powered approach, but a perfect answer isn't really required in practice; all we need is an answer that has a sufficiently high chance of being correct. If we assume that there's nothing supernatural (or "emergent") about human minds [1], then it is likely that the problem is at least tractable. Given the vast computational power of existing computers, it is likely that the AI would have access to at least as many computational resources as the sum of all the brains working on FoldIt. Given Moore's Law, it is likely that the AI would soon surpass FoldIt and keep expanding its power exponentially, especially if the AI is able to recursively improve its own hardware (by using purely conventional means, at least initially). [1] Which is an assumption that both my Nanodevil's Advocate persona and I share.

Scaling it up is absolutely dependent on currently nonexistent information. This is not my area, but a lot of my work revolves around control of kinesin and dynein (molecular motors that carry cargoes via microtubule tracks), and the problems are often similar in nature.

Essentially, we can make small pieces. Putting them together is an entirely different thing. But let's make this more general.

The process of discovery has, so far throughout history, followed a very irregular path: 1) there is a general idea; 2) some progress is made; 3) progress runs into an... (read more)

5Polymeron
It is very possible that the necessary information already exists, imperfect and incomplete though it may be, and that enough processing of it would yield the correct answer. We can't know otherwise, because we don't spend thousands of years analyzing our current level of information before beginning experimentation, but in the shift between AI-time and human-time it can bring a good deal more cleverness and ingenuity to bear on that problem than we've been able to apply to it so far. That isn't to say that this is likely, but it doesn't seem far-fetched to me. If you gave an AI the nuclear physics information we had in 1950, would it be able to spit out schematics for an H-bomb, without further experimentation? Maybe. Who knows?
0dlthomas
You did in the original post I responded to. Strictly speaking, that is a positive claim. It is not one I disagree with, for a proper translation of "likely" into probability, but it is also not what you said. "It can't deduce how to create nanorobots" is a concrete, specific, positive claim about the (in)abilities of an AI. Don't misinterpret this as me expecting certainty - of course certainty doesn't exist, and doubly so for this kind of thing. What I am saying, though, is that a qualified sentence such as "X will likely happen" asserts a much weaker belief than an unqualified sentence like "X will happen." "It likely can't deduce how to create nanorobots" is a statement I think I agree with, although one must be careful not use it as if it were stronger than it is. That is not a claim I made. "X will happen" implies a high confidence - saying this when you expect it is, say, 55% likely seems strange. Saying this when you expect it to be something less than 10% likely (as I do in this case) seems outright wrong. I still buckle my seatbelt, though, even though I get in a wreck well less than 10% of the time. This is not to say I made no claims. The claim I made, implicitly, was that you made a statement about the (in)capabilities of an AI that seemed overconfident and which lacked justification. You have given some justification since (and I've adjusted my estimate down, although I still don't discount it entirely), in amongst your argument with straw-dlthomas.
0Bugmaster
FWIW I think you are likely to be right. However, I will continue in my Nanodevil's Advocate role. You say, I think this depends on what the AI wants to build, on how complete our existing knowledge is, and on how powerful the AI is. Is there any reason why the AI could not (given sufficient computational resources) run a detailed simulation of every atom that it cares about, and arrive at a perfect design that way? In practice, its simulation won't need to be as complex as that, because some of the work has already been performed by human scientists over the ages.

With absolute certainty, I don't. If absolute certainty is what you are talking about, then this discussion has nothing to do with science.

If you aren't talking about absolutes, then you can make your own estimation of the likelihood that somehow an AI can derive correct conclusions from incomplete data (and then correct second-order conclusions from those first conclusions, and third-order, and so on). And our current data is woefully incomplete, and many of our basic measurements are imprecise.

In other words, your criticism here seems to boil down to saying "I... (read more)

5dlthomas
No, my criticism is "you haven't argued that it's sufficiently unlikely, you've simply stated that it is." You made a positive claim; I asked that you back it up. With regard to the claim itself, it may very well be that AI-making-nanostuff isn't a big worry. For any inference, the stacking of error in integration that you refer to is certainly a limiting factor - I don't know how limiting. I also don't know how incomplete our data is, with regard to producing nanomagic stuff. We've already built some nanoscale machines, albeit very simple ones. To what degree is scaling it up reliant on experimentation that couldn't be done in simulation? I just don't know. I am not comfortable assigning it vanishingly small probability without explicit reasoning.
4Bugmaster
Speaking as Nanodevil's Advocate again, one objection I could bring up goes as follows: While it is true that applying incomplete knowledge to practical tasks (such as ending the world or whatnot) is difficult, in this specific case our knowledge is complete enough. We humans currently have enough scientific data to develop self-replicating nanotechnology within the next 20 years (which is what we will most likely end up doing). An AI would be able to do this much faster, since it is smarter than us; is not hampered by our cognitive and social biases; and can integrate information from multiple sources much better than we can.

Yes, but it can't get to nanotechnology without a whole lot of experimentation. It can't deduce how to create nanorobots, it would have to figure it out by testing and experimentation. Both steps limited in speed, far more than sheer computation.

3dlthomas
How do you know that?

I'm not talking about limited sensory data here (although that would fall under point 2). The issue is much broader:

  • We humans have limited data on how the universe works
  • Only a limited subset of that limited data is available to any intelligence, real or artificial

Say that you make a FOOM-ing AI that has decided to make all humans' dopaminergic systems work in a particular, "better" way. This AI would have to figure out how to do so from the available data on the dopaminergic system. It could analyze that data millions of times more effectively... (read more)

4dlthomas
I don't think you know that.
2Bugmaster
Presumably, once the AI gets access to nanotechnology, it could implement anything it wants very quickly, bypassing the need to wait for tissues to grow, parts to be machined, etc. I personally don't believe that nanotechnology could work at such magical speeds (and I doubt that it could even exist), but I could be wrong, so I'm playing a bit of Devil's Advocate here.

Hm. I must be missing something. No, I haven't read all the sequences in detail, so if these are silly, basic, questions - please just point me to the specific articles that answer them.

You have an Oracle AI that is, say, a trillionfold better at taking existing data and producing inferences.

1) This Oracle AI produces inferences. It still needs to test those inferences (i.e. perform experiments) and get data that allow the next inferential cycle to commence. Without experimental feedback, the inferential chain will quickly either expand into an infinity ... (read more)

2jacob_cannell
Point 1 has come up in at least one form I remember. There was an interesting discussion some while back about limits to the speed of growth of new computer hardware cycles, which have critical end steps that don't seem amenable to further speedup by intelligence alone. The last stages of designing a microchip involve a large amount of layout solving, physical simulation, and then actual physical testing. These steps are actually fairly predictable: it takes about C amount of computation using certain algorithms to make a new microchip, the algorithms are already best in complexity class (so further improvements will be minor), and C is increasing in a predictable fashion. These models are actually fairly detailed (see the semiconductor roadmap, for example). If I can find that discussion soon before I get distracted I'll edit it into this discussion. Note however that 1, while interesting, isn't a fully general counterargument against a rapid intelligence explosion, because of the overhang issue if nothing else. Point 2 has also been discussed. Humans make good 'servitors'. Oh, that's easy enough. Oxygen is highly reactive and unstable. Its existence on a planet is entirely dependent on complex organic processes, i.e. life. No life, no oxygen. Simple solution: kill a large fraction of photosynthesizing earth-life. Likely paths towards the goal: 1. coordinated detonation of a large number of high-yield thermonuclear weapons; 2. self-replicating nanotechnology.
1XiXiDu
I asked something similar here.
3dlthomas
The answer from the sequences is that yes, there is a limit to how much an AI can infer based on limited sensory data, but you should be careful not to assume that just because it is limited, it's limited to something near our expectations. Until you've demonstrated that FOOM cannot lie below that limit, you have to assume that it might (if you're trying to carefully avoid FOOMing).
kalla724140
  1. Yes, if you can avoid replacing the solvent. But how do you avoid that, and still avoid the creation of ice crystals? Actually, now that I think of it, there is a possible solution: expressing icefish proteins within neuronal cells. Of course, who knows what they would do to neuronal physiology, and you can't really express them after death...

  2. I'm not sure that less toxic cryoprotectants are really feasible. But yes, that would be a good step forward.

  3. I actually think it's better to keep them together. Trying theoretical approaches as quickly as possible an

... (read more)
5lsparrish
As a layman I sort of lump icefish proteins under "cryoprotectants", though I am not sure this is accurate -- that might technically be reserved for penetrating antifreeze compounds. The impression I have of the cryoprotectant toxicity problem is that we've already examined the small molecules that do the trick, and they are toxic in high (enough) concentrations over significant (enough) periods of time. Large molecules of a less toxic nature exist, but have a hard time passing through cell membranes, so they can't protect the interior of cells very well. M22 uses large molecules (analogous to the ice-blocking proteins found in nature -- although these are actually polymers) to block ice formation on the outside of the cells (where there is a lower concentration of salts to start with), so a lower concentration of small-molecule CPAs are needed. Another thing is that the different low-weight cryoprotective agents interact with each other somehow to block the toxicity effects -- thus certain mixtures get better results than pure solutions. This seemingly suggests that other ways to block their toxicity mechanisms could also be found. My current favorite idea is reprogramming the cells to produce large molecules that block ice formation -- or which mitigate toxicity in cryoprotectants. We're talking basically about gene therapy here, and that's going to have complicated side effects, but not harder than some of the SENS proposals (e.g. WILT). Another promising higher-tech idea is to use either bioengineered microbes or biomimetic nanotech (assuming that idea matures) to deliver large molecules to the insides of cells. Alternately, more rapid delivery and removal of small-molecule CPAs to reduce exposure time. In addition to this, reduced cooling times would be helpful, which makes me think of heat-conductive nanotech implants (CNTs maybe).

All good reason to keep working on it.

The questions you ask are very complex. The short answers (and then I'm really leaving the question at that point until a longer article is ready):

  • Rehydration involves pulling off the stabilizer molecules (glycerol, trehalose) and replacing them dynamically with water. This can induce folding changes, some of which are irreversible. This is not theoretical: many biochemists have to deal with this in their daily work.
  • Membrane distortions also distort relative position of proteins within that membrane (and the struct
... (read more)
kalla724360

I don't think any intelligence can read information that is no longer there. So, no, I don't think it will help.

kalla724450

In order, and briefly:

  • In Milwaukee protocol, you are giving people ketamine and some benzo to silence brain activity. Ketamine inhibits NMDA channels - which means that presynaptic neurons can still fire, but the signal won't be fully received. Benzos make GABA receptors more sensitive to GABA - so they don't do anything unless GABAergic neurons are still firing normally.

In essence, this tunes down excitatory signals, while tuning up the inhibitory signals. It doesn't actually stop either, and it certainly doesn't interfere with the signalling process... (read more)

JoshuaZ120

Thanks. This comment and your other comments have made me substantially reduce my confidence in some form of cryonics working.

kalla724170

We are deep into guessing territory here, but I would think that coarser option (magnesium, phosphorylation states, other modifications, and presence and binding status of other cofactors, especially GTPases) would be sufficient. Certainly for a simulated upload.

No, I don't work with Ed. I don't use optogenetics in my work, although I plan to in not too distant future.

kalla724200

All of it! Coma is not a state where temporal resolution is lost!

You can silence or deactivate neurons in thousands of ways, by altering one or more signaling pathways within the cells, or by blocking a certain channel. The signaling slows down, but it doesn't stop. Stop it, and you kill the cell within a few minutes; and even if you restart things, signaling no longer works the way it did before.

9JoshuaZ
So even in something like the Milwaukee protocol there's still ongoing activity in every neuron? So what is different between human neurons and, say, those of C. elegans? They can survive substantial reductions in temperature with neuronal activity intact. Even bringing them down to liquid nitrogen temperatures leaves a large fraction surviving, and that's true whether they are cooled slowly or quickly. What am I missing here?
kalla724310

I don't believe so. Distortion of the membranes and replacement of solvent irretrievably destroys information that I believe to be essential to the structure of the mind. I don't think that would ever be readable into anything but a pale copy of the original person, no matter what kind of technological advance occurs (information simply isn't there to be read, regardless of how advanced the reader may be).

kalla724400

This was supposed to be a quick side-comment. I have now promised to eventually write a longer text on the subject, and I will do so - after the current "bundle" of texts I'm writing is finished. Be patient - it may be a year or so. I am not prepared to discuss it at the level approaching a scientific paper; not yet.

Keep in mind two things. I am in favor of life extension, and I do not want to discourage cryonic research (we never know what's possible, and research should go on).

Thanks. While a scientific paper would be wonderful, even a blog post would be a huge step forward. In so far as a technical case has been made against cryonics, it is either Martinenaite and Tavenier 2010, or it is technically erroneous, or it is in dashed-off blog comments that darkly hint and never get into the detail. The bar you have to clear to write the best ever technical criticism of cryonics is a touch higher than it was when I first blogged about it, but still pretty low.

-1James_Miller
I've signed up for cryonics (with Alcor) because I believe that if civilization doesn't collapse then within the next 100 years there will likely be an intelligence trillions upon trillions of times smarter than anyone alive today. If such an intelligence did come into being do you think it would have the capacity to revive my frozen brain?

Perhaps a better definition would help: I'm thinking about active zones within a synapse. You may have one "synapse" which has two or more active zones of differing sizes (the classic model, Drosophila NMJ, has many active zones within a synapse). The unit of integration (the unit you need to understand) is not always the synapse, but is often the active zone itself.

kalla724360

In general, uploading a C. elegans, i.e. creating an abstract artificial worm? Entirely doable. Will probably be done in not-too-distant future.

Uploading a particular C. elegans, so that the simulation reflects learning and experiences of that particular animal? Orders of magnitude more difficult. Might be possible, if we have really good technology and are looking at the living animal.

Uploading a frozen C. elegans, using current technology? Again, you might be able to create an abstract worm, with all the instinctive behaviors, and maybe a few particular... (read more)

3MugaSofer
I'm aware you won't reply to this - I'm writing for other archive-readers - but I think they meant "is it in principle impossible to upload a particular frozen C. elegans?" To which, I assume based on your other comments, you would answer "yes, the information simply isn't there anymore, IMO."
kalla724340

Local ion channel density (i.e. active zones), plus the modification status of all those ion channels, plus the signalling status of all the presynaptic and postsynaptic modifiers (including NO and endocannabinoids).

You see, knowing the strength of all synapses for a particular neuron won't tell you how that neuron will react to inputs. You also need temporal resolution: when a signal hits the synapse #3489, what will be the exact state of that synapse? The state determines how and when the signal will be passed on. And when the potential from that input ... (read more)

8jsteinhardt
Thanks for the response. Do you think it is important to explicitly consider the tertiary structure of proteins along the membrane, or can we keep track of coarser things such as for instance whether or not a given NMDA channel is magnesium-blocked or not? EDIT: Also, you mentioned optogenetics at some point. Do you work with Ed Boyden by any chance?
9JoshuaZ
How much of this do we actually need in practice? Humans can be put in states where there's almost no brain activity, such as an induced coma, and brought out of it with no damage. That suggests that things like the precise state of #9871 at that moment shouldn't matter that much.
kalla724100

I agree with you on both points. And also about the error bars - I don't think I can "prove" cryonics to be pointless.

But one has to make decisions based on something. I would rather build a school in Africa than have my body frozen (even though, to reiterate, I'm all for living longer, and I do not believe that death has inherent value).

Biggest obstacles are membrane distortions, solvent replacement and signalling event interruptions. Mind is not so much written into the structure of the brain as into the structure+dynamic activity. In a sense... (read more)

0CasioTheSane
I agree with you that the enormous cost is probably not worth it, when you start thinking what else could be accomplished with the money in the context of its low probability of success. However, those technologies that increase human lifespan are really something entirely different from cryonics, not a replacement for it. Even if we increase lifespan significantly, as long as we still have a lifespan, cryonics would allow us to remain frozen until even more life extension technologies come about. It's also a potentially viable method for keeping people alive for long-distance space travel at sub-relativistic speeds. I'd look forward to seeing a more detailed post (or even a journal article) from you going into the biochemistry specifics of the problems with cryonics you mention in this post and your other posts in this thread. I am particularly curious why rehydration would denature proteins which are naturally stable in water, and what sort of membrane distortions would occur that aren't reversible.
kalla724190

I'll eventually organize my thoughts in something worthy of a post. Until then, this has already gone into way more detail than I intended. Thus, briefly:

The damage that is occurring: distortion of membranes, denaturation of proteins (very likely), disruption of signalling pathways. Just changing the exact localization of Ca microdomains within a synapse can wreak havoc; replacing the liquid completely? Not going to work.

I don't necessarily think that low temps have anything to do with denaturation. Replacing the solvent, however, would do it almost unav... (read more)

5lsparrish
1. Elsewhere you noted that the timing of signals within the synapses is important. Is the relative timing something that can be kept straight via careful induction of low temperatures?
2. In your opinion, would less toxic cryoprotectants be sufficient and/or necessary for preserving the brain in a way that keeps a significant amount of the personality?
3. What do you think of my notion that, for the sake of clarity, "cryonics" should be split into distinct categories; one for the research goal and one for the current ongoing practice?

Whether "working memory" is memory at all, or whether it is a process of attentional control as applied to normal long-term memory... we don't know for sure. So in that sense, you are totally right.

But the exact nature of the process is, perhaps strangely, unimportant. The question is whether the process can be enhanced, and I would say that the answer is very likely to be yes.

Also, keep in mind that working memory enhancement scenario is just one I pulled from thin air as an example. The larger point is that we are rapidly gaining the a... (read more)

It would appear that all of us have very similar amounts of working memory space. It gets very complicated very fast, and there are some aspects that vary a lot. But in general, its capacity appears to be the bottleneck of fluid intelligence (and a lot of crystallized intelligence might be, in fact, learned adaptations for getting around this bottleneck).

How superior would it be? There are some strong indications that adding more "chunks" to the working space would be somewhat akin to adding more qubits to a quantum computer: if having four "... (read more)

0JoshuaZ
I'm curious as to why this comment has been downvoted. Kalla seems to be making an essentially uncontroversial and correct summary of what many researchers think is the relevance of working memory size.

Um...there is quite a bit of information. For instance, one major hurdle was ice crystal formation, which has been overcome - but at the price of toxicity (currently unspecified, but - in my moderately informed guesstimate - likely to be related to protein misfolding and membrane distortion).

We also have quite a bit of knowledge of synaptic structure and physiology. I can make a pretty good guess at some of the problems. There are likely many others (many more problems that I cannot predict), but the ones I can are pretty daunting.

5CasioTheSane
I was unclear; I didn't mean that there's no information, just that there's potentially no information on specific areas that are critical for a meaningful prediction:

  • New technologies and ideas that bypass, rather than solve, previously defined obstacles, thus making them far easier than anticipated
  • Newly discovered obstacles which are far more difficult to overcome than any of the previously defined obstacles, making the problem much more difficult than anticipated

Given that both of these types of events are common when developing new technology, attempting to predict how long it will take and how well it will work is basically a waste of time. Even if you synthesize all of the data you have in a rigorous way and come up with a number, I expect that the number would have error bars so large that it's merely a quantitative expression of the impossibility of accurately predicting such events with the data you have. I am curious about what the biggest obstacles are that you see that cause you to give an estimate 20 orders of magnitude lower than I do. If that is accurate, thinking about and working on cryonics is a pointless waste of time.
kalla724750

Ok, now we are squeezing a comment way too far. Let me give you a fuller view: I am a neuroscientist, and I specialize in the biochemistry/biophysics of the synapse (and interactions with ER and mitochondria there). I also work on membranes and the effect on lipid composition in the opposing leaflets for all the organelles involved.

Looking at what happens during cryonics, I do not see any physically possible way this damage could ever be repaired. Reading the structure and "downloading it" is impossible, since many aspects of synaptic strength an... (read more)

8Eliezer Yudkowsky
I will quickly remark that some aspects of this comment seem to betray a non-info-theoretic point of view. From the perspective of someone like me, the key question for cryonics is "Do two functionally different start states (two different people) map onto theoretically indistinguishable molecular end states?" You are not an expert on the future possibilities of molecular nanotechnology and will not be asked to testify as such, but of course we all accept that arbitrarily great physical power cannot reconstruct a canister of ash because the cremation process maps many different possible starting people to widely overlapping possible canisters of ash. It is this question of many-to-one mapping alone on which we are interested in your expertise, and I would ask you to please presume for the sake of discussion that the end states of interest will be distinguished to molecular granularity (albeit obviously not to a finer position than thermal noise, let alone quantum uncertainty). That said, I think we will all be interested if you can expand on that, and whether you mean it in the customary sense of "it won't boot back up when you switch it on" or in the info-theoretic sense of "this process will map functionally different synapses to exactly similar molecular states, or a spread of such states, up to thermal noise". You are not being asked to overcome a burden of infinite proof either - heuristic argument is fine, we're not asking for particular proofs you can't possibly provide - we just want to make sure that what is being argued is the precise question we are interested in, that of many-to-one mappings onto molecular end states up to thermal noise. EDIT: Oops, didn't realize this was an old comment.

No doubt you can identify particular local info that is causally effective in changing local states, and that is lost or destroyed in cryonics. The key question is the redundancy of this info with other info elsewhere. If there is lots of redundancy, then we only need one place where it is held to be preserved. Your comments here have not spoken to this key issue.

7Merkle
You'll need to read Molecular Repair of the Brain. Note that it discusses a variety of repair methods, including methods which carry out repairs at sufficiently low temperatures (between 4K and 77K) that there is no risk that "molecular drift" would undo previous work. By making incredibly conservative assumptions about the speed of operations, it is possible to stretch out the time required to repair a system the size of the human brain to three years, but really this time was chosen for psychological reasons. Repairing a person "too quickly" seems to annoy people. You might also want to read Convergent Assembly. As this is a technical paper which makes no mention of controversial topics, it provides more realistic estimates of manufacturing times. Total manufacturing time for rigid objects such as a human brain at (say) 20K is likely to be 100 to 1000 seconds. This does not include the time required to analyze your cryopreserved brain and determine the healthy state, which is likely to be significantly longer. Note that some alterations to the healthy state (the blueprints) will be required prior to manufacture, including various modifications to facilitate manufacture, the inclusion of heating elements for rewarming, and various control systems to monitor and modulate the rewarming and metabolic start-up processes as well as the resumption of consciousness. After you've had time to digest the paper, I'd be interested in your comments. As Ciphergoth has said, there are no (repeat no) credible arguments against the feasibility of cryonics in the extant literature. If you have any, it would be most interesting. As a neuroscientist, you might also be amused by Large Scale Analysis of Neural Structures. For recent work on vitrification, I refer you to Greg Fahy at 21st Century Medicine.
5David_Gerard
I'm reading your comment and am now thinking of this as startlingly optimistic, particularly this bit, which appears just wrong per your comment. Except I realise I don't understand the area enough to rewrite that bit. Gah. Are the Wikipedia articles on what you're talking about here any good for getting up to speed?
9Vladimir_Nesov
The point you're making seems to be that performing the repair is impossible in practice. Apart from that difficulty, do you think enough information is preserved in the location of all atoms in a cryopreserved brain, so that given detailed knowledge of how brains work in general this information would in theory be sufficient to reconstruct the initial person (even if this information is impractical to actually extract or process)? One possibility for avoiding the reconstruction of brains out of atoms is to instead reconstruct a Whole Brain Emulation of the original person. Do you think developing the technology of WBE is similarly impossible, or that there are analogous difficulties with use of WBE for this purpose?

If you have a technical argument against cryonics, please write it up as an actual blog post, ideally under your real name so you can flash your credentials. It will be the most substantial essay arguing for such a point ever written: see my blog. I'm pretty convinced that if there was really a strong argument of the sort you're trying to make, someone would already have done this, so I take it as strong evidence that they haven't.

Do you think uploading C. elegans is impossible?

8CasioTheSane
Can you elaborate on the damage that is occurring, even with cryoprotectants? Why/how would low temps in a cryoprotectant denature proteins? If you have time I would really like to see the detailed posts, perhaps even in a new thread. I am also a bioengineer/biophysicist but I have little knowledge of neuroscience.

Fascinating. I've been waiting for a while for a well-educated neuroscientist to come here, as I think there are a lot of interesting questions that hinge on issues in neuroscience that are at least hard for me to answer (my only exposure to it is a semester-long class in undergrad). In particular, I'd be interested to know what level of resolution you think would be necessary to simulate a brain to actually get reasonable whole-brain emulations (for instance, is neuronal population level enough? Or do we need to look at individual neurons? Or even further, to look at local ion channel density on the membrane?)

Let me add to your description of the "Loci method" (also the basis of ancient Ars Memoria). You are using spatial memory (which is probably the evolutionarily oldest/most optimized) to piggyback the data you want to memorize.

There is an easier way for people who don't do that well in visualization. Divide a sheet of paper into areas, then write down notes on what you are trying to remember. Make areas somewhat irregular, and connect them with lines, squiggles, or other unique markers. When you write them, and when you look them over, make note o... (read more)

3ryjm
As a data point, I was always horrible at visualization. My friends used to make fun of me for not being able to navigate my hometown. That is interesting though, I hadn't heard of this method. Thanks!

I can try, but the issue is too complex for comments. A series of posts would be required to do it justice, so mind the relative shallowness of what follows.

I'll focus on one thing. An artificial intelligence enhancement which adds more "spaces" to the working memory would create a human being capable of thinking far beyond any unenhanced human. This is not just a quantitative jump: we aren't talking about someone who thinks along the same lines, just faster. We are talking about a qualitative change, making connections that are literally impossible to... (read more)

0jacob_cannell
Really? A dubious notion in the first place, but untrue by the counterexamples of folks who go above 4 in dual N-back. You seem to have a confused, fantastical notion of working memory ungrounded in neuroscientific rigor. The rough analogy I have heard is that working memory is a coarse equivalent of registers, but this doesn't convey the enormity of the items brains hold in each working memory 'slot'. Nonetheless, more registers do not entail superpowers. Chess players' increase in ability over time is equivalent to an exponential increase in algorithmic search performance. This increase involves hierarchical pattern learning in the cortex. Short-term working memory is more involved in maintaining a stack of moves in the heuristic search algorithm humans use (the register analogy).
0private_messaging
Well, my opinion is that there already are such people, with several times the working memory. The impact of that was absolutely enormous indeed and is what brought us much of the advancement in technology and science. If you look at top physicists or mathematicians or the like - they literally can 'cram "more math" into a thought than you would be able to otherwise', vastly more. It probably doesn't help a whole lot with economics and the like, though - the depth of predictions is naturally logarithmic in the computational power or knowledge of the initial state, so the payoffs from getting smarter, far from the movie Limitless, are rather low, and it is still primarily a chance game.
1jsteinhardt
My admittedly uninformed impression is that the state of knowledge about working memory is pretty limited, at least relative to the claims you are making. Do you think you could clarify somewhat, e.g. either show that our knowledge is not limited, or that you don't need any precise knowledge about working memory to support your claims? In particular, I have not seen convincing evidence that working memory even exists, and it's unclear what a "chunk" is, or how we manipulate them (perhaps manipulation costs grow exponentially with the number of chunks).
6Dustin
I like this series of thoughts, but I wonder about just how superior a human with 2 or 3 times the working memory would be. Currently, do all humans have the same amount of working memory? If not, how "superior" are those with more working memory?

? No.

I fully admitted that I have only an off-the-cuff estimation (i.e. something I'm not very certain about).

Then I asked you if you have something better - some estimate based in reality?

5steven0461
OK, so you have some assumptions that you attach some high but not extreme amount of probability to, according to which the chances of cryonics working are on the rough order of 10^-22. Fair enough. But given that the relevant question is how certain you are about the assumptions, why even bring up the 20 orders of magnitude, if it doesn't matter whether it's 20 orders of magnitude or 1000 orders of magnitude? What role could the 20 orders of magnitude number play in anyone's decision making? Note that I'm a different person than user:CasioTheSane.

Good point. I'm trying to cast a wide net, to see whether there are highly transferable skills that I haven't considered before. There are no plans (yet), this is simply a kind of musing that may (or may not) become a basis for thinking about plans later on.

What are the numbers that lead to your 1% estimate?

I will eventually sit down and make a back of the envelope calculation on this, but my current off-the-cuff estimate is about twenty (yes, twenty) orders of magnitude lower.

0CasioTheSane
Mine was also just an off-the-cuff "guesstimate." I am skeptical that it is possible to estimate the chances of cryonics working in a rigorous quantitative way. There's no way to know what technical hurdles are actually involved to make it work. How can you estimate your chances of success when you have no information about the difficulty of the problem?

I see. So if you made a billion trillion independent statements of the form "cryonics won't work", on topics you were equally sure about, you'd be pretty confident you were right on all of them?
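(For what it's worth, a rough reading of that figure, assuming independence and taking "a billion trillion" as 10^21 statements, each wrong with probability 10^-22: the expected number of errors is 10^21 x 10^-22 = 0.1, so

    \[ P(\text{all correct}) \approx (1 - 10^{-22})^{10^{21}} \approx e^{-0.1} \approx 0.90, \]

i.e. roughly a 90% chance of being right on every single one.)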

kalla724130

If what you say were true - we "never cured cancer in small mammals" - then yes, the conclusion that cancer research is bullshit would have some merit.

But since we did cure a variety of cancers in small mammals, and since we are constantly (if slowly) improving both length of survival and cure rates in humans, the comparison does not stand.

(Also: the integration unit of the human mind is not the synapse; it is an active zone, a molecular raft within a synapse. My personal view, as a molecular biophysicist turned neuroscientist, is that freezing dam... (read more)

1amcknight
I'm not quite sure what you mean by molecular raft, but do you think you need to record properties of molecular rafts or just properties of the population of molecular rafts in the neuron? (e.g. amount of each type)

Very good. Objection 2 in particular resonates with my view of the situation.

One other thing that is often missed is the fact that SI assumes that the development of superintelligent AI will precede other possible scenarios - including the augmented human intelligence scenario (CBI producing superhumans, with human motivations and emotions, but hugely enhanced intelligence). In my personal view, this scenario is far more likely than the creation of either friendly or unfriendly AI, and the problems related to this scenario are far more pressing.

2NancyLebovitz
Could you expand on that?
kalla724100

The possession of knowledge does not kill the sense of wonder and mystery. There is always more mystery.

-- Anais Nin

8MixedNuts
This misses the point. There shouldn't be any mystery left. And that'll be okay.

I see your point, but I'll argue that yes, crowdsourcing is the appropriate term.

Google may be the collective brain of the entire planet, but it will give you only those results you search for. The entire idea here is that you utilize things you can't possibly think of yourself - which includes "which terms should I put into the Google search."

2chaosmosis
In real life, you can only ask the people who you're already friends with. That means you'll probably share common biases. Unless you asked strangers. That might be a good way to fix this.
0fburnaby
Yes. The art of Googling can be pretty difficult, and a few brains are still smarter (though less broadly knowledgeable, perhaps) than Google, at this point in time.

I may have to edit the text for clarification. In fact, I'm going to do so right now.

True enough. Anything can be overdone.

It's not, not necessarily. There isn't as much research on help-seeking as there should be, but there are some interesting observations.

I'm failing to find the references right now, so take this with several grains of salt, but this is what I recall: asking for assistance does not lower status, and might even enhance it, while asking for a complete solution is indeed status-lowering. I.e., if you ask for hints or general help in solving a problem, that's OK, but if you ask someone to give you an answer directly, that isn't.

But all of that is a bit beside the... (read more)

1John_Maxwell
I would be careful extrapolating results from one study, which only has one trial scenario. See these comments of mine: http://lesswrong.com/lw/bs0/knowledge_value_knowledge_quality_domain/6d88 http://lesswrong.com/lw/bs0/knowledge_value_knowledge_quality_domain/6d94 What you describe would be a status-lowering act for a CEO if she was doing it with her employees (i.e. such a CEO might be praised for her egalitarian leadership style), but it's only a little status-lowering if you do it with friends you are on equal terms with. The more advice/help you ask for, the more you lower your status ceiling. Thought experiment: imagine a CEO who asked her employees for input on almost every decision she made. This is unfortunate (seems possible that it's an efficient use of resources to get input from friends on all of your major life decisions and problems; I know I think better about other people's problems than my own). It's not obviously unsolvable though. It probably matters a lot the sort of person you are talking to. I suspect I have a strong tendency towards assigning status based on intelligence only, so hearing about someone's problems doesn't cause me to assign them lower status. (I've also taken a number of acting classes, which partially left me with the alief that status is a big game that doesn't really matter.) I assumed this stuff was true for other people as well, tried to get them to debug/improve me, and found out the hard way that they would permanently lower my status for this.
1thomblake
For status amongst smarty-folks, asking for help definitely puts a ceiling on one's status. You are forever denied the realm of the self-sufficient, infallible genius.

Sure.

Otte, Dialogues Clin Neurosci. 2011;13(4):413-21. Driessen and Hollon, Psychiatr Clin North Am. 2010 Sep;33(3):537-55. Flessner, Child Adolesc Psychiatr Clin N Am. 2011 Apr;20(2):319-28. Foroushani et al. BMC Psychiatry. 2011 Aug 12;11:131.

Books, hmm. I have not read it myself, but I heard that Leahy's "Cognitive Therapy Techniques: A Practitioner's Guide" is well regarded. A very commonly recommended less-professional book is Greenberger and Padesky's "Mind over Mood."

I'm not aware of any research in this area. It appears plausible, but there could be many confounding factors.
