Among the infinite number of possible paths, the percentage of paths we are adding up here is still very close to zero.
Perhaps I can attempt another rephrasing of the problem: what is the mechanism that would make an AI automatically seek these paths out, or make them any more likely than the infinite number of other paths?
I.e. if we develop an AI which is not specifically designed for the purpose of destroying life on Earth, how would that AI arrive at a desire to destroy life on Earth, and by which mechanism would it gain the ability to accomplish its goal?
This ...
You are correct. I did not phrase my original posts carefully.
I hope that my further comments have made my position more clear?
We are trapped in an endless chain here. The computer would still somehow have to deduce that the Wikipedia entry describing the One Ring is real, while the One Ring itself is not.
My apologies, but this is something completely different.
The scenario takes human beings - who have a desire to escape the box, possess a theory of mind that allows them to conceive of notions such as "what are the aliens thinking" or "deception", etc. - and puts them in the role of the AI.
What I'm looking for is a plausible mechanism by which an AI might spontaneously develop such abilities. How (and why) would an AI develop a desire to escape from the box? How (and why) would an AI develop a theory of mind? Absent a theory of mind, how would it ever be able to manipulate humans?
Yet again: the ability to discern which parts of fiction accurately reflect human psychology.
An AI searches the internet. It finds a fictional account about early warning systems causing nuclear war. It finds discussions about this topic. It finds a fictional account about Frodo taking the Ring to Mount Doom. It finds discussions about this topic. Why does this AI dedicate its next 10^15 cycles to determining how to mess with the early warning systems, and not to determining how to create the One Ring to Rule Them All?
(Plus other problems mentioned in the other comments.)
Doesn't work.
This requires the AI to already have the ability to comprehend what manipulation is, to develop a manipulation strategy of any kind (even one that will succeed 0.01% of the time), the ability to hide its true intent, the ability to understand that not hiding its true intent would be bad, and the ability to discern which issues are low-salience and which are high-salience for humans from the get-go. And many other things, actually, but this is already quite a list.
None of these abilities automatically "fall out" from an intelligent system either.
This seems like an accurate and highly relevant point. Searching a solution space faster doesn't mean one can find a better solution if it isn't there.
Or if your search algorithm never accesses the relevant search space. A quantitative advantage in one system does not translate into a quantitative advantage in a qualitatively different system.
That is my point: it doesn't get to find out about general human behavior, not even from the Internet. It lacks the systems to contextualize human interactions, which have nothing to do with general intelligence.
Take a hugely mathematically capable autistic kid. Give him access to the internet. Watch him develop the ability to recognize human interactions, understand human priorities, etc. to a degree sufficient to recognize that hacking an early warning system is the way to go?
Only if it has the skills required to analyze and contextualize human interactions. Otherwise, the Internet is a whole lot of gibberish.
Again, these skills do not automatically fall out of any intelligent system.
I'm not so much interested in the exact mechanism of how humans would be convinced to go to war, as in an even approximate mechanism by which an AI would become good at convincing humans to do anything.
The ability to communicate a desire and convince people to take a particular course of action is not something that automatically "falls out" from an intelligent system. You need a theory of mind, an understanding of what to say, when to say it, and how to present information. There are hundreds of kids on the autistic spectrum who could trounce both of u...
I'm vaguely familiar with the models you mention. Correct me if I'm wrong, but don't they have a final stopping point, which we are actually projected to reach in ten to twenty years? At a certain point, further miniaturization becomes unfeasible, and the growth of computational power slows to a crawl. This has been put forward as one of the main reasons for research into optronics, spintronics, etc.
We do NOT have sufficient basic information to develop processors based on simulation alone in those other areas. Much more practical work is necessary.
As fo...
By all means, continue. It's an interesting topic to think about.
The problem with "atoms up" simulation is the amount of computational power it requires. Consider the difference in complexity when calculating a three-body problem as compared to a two-body problem.
Then take into account the current protein-folding algorithms. People have been trying to calculate the folding of single protein molecules (and fairly short ones at that) by taking into account the main physical forces at play. In order to do this in a reasonable amount of time, great shortcu...
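To make the scaling concrete, here is a back-of-the-envelope sketch in Python. The numbers (a ~10^4-atom solvated protein, ~10^9 femtosecond timesteps for a microsecond of folding) are my own illustrative assumptions, not figures from the comment above:

```python
# Back-of-the-envelope sketch: cost of naive all-pairs force evaluation
# in an "atoms up" molecular dynamics run. All numbers are illustrative.

def pairwise_interactions(n_bodies: int) -> int:
    """Pairs evaluated per timestep: n*(n-1)/2, i.e. O(n^2) growth."""
    return n_bodies * (n_bodies - 1) // 2

for n in (2, 3, 10_000):  # two-body, three-body, small solvated protein
    print(f"{n:>6} bodies -> {pairwise_interactions(n):>12,} pairs per step")

# Folding on a microsecond timescale at femtosecond timesteps ~ 10^9 steps:
steps = 10**9
print(f"total ~ {pairwise_interactions(10_000) * steps:.1e} pair evaluations")
```

Even this toy count lands around 10^16 pair evaluations for one short protein, which is why real folding codes lean on the shortcuts mentioned above.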
See my answer to dlthomas.
Scaling it up is absolutely dependent on currently nonexistent information. This is not my area, but a lot of my work revolves around control of kinesin and dynein (molecular motors that carry cargoes along microtubule tracks), and the problems are often similar in nature.
Essentially, we can make small pieces. Putting them together is an entirely different thing. But let's make this more general.
The process of discovery has, so far throughout history, followed a very irregular path: 1) there is a general idea; 2) some progress is made; 3) progress runs into an...
With absolute certainty, I don't. If absolute certainty is what you are talking about, then this discussion has nothing to do with science.
If you aren't talking about absolutes, then you can make your own estimation of the likelihood that an AI can somehow derive correct conclusions from incomplete data (and then correct second-order conclusions from those first conclusions, and third-order, and so on). And our current data is woefully incomplete, and many of our basic measurements are imprecise.
In other words, your criticism here seems to boil down to saying "I...
Yes, but it can't get to nanotechnology without a whole lot of experimentation. It can't simply deduce how to create nanorobots; it would have to figure it out by testing and experimentation. Both steps are limited in speed, far more so than sheer computation.
I'm not talking about limited sensory data here (although that would fall under point 2). The issue is much broader:
Say that you make a FOOM-ing AI that has decided to make all humans' dopaminergic systems work in a particular, "better" way. This AI would have to figure out how to do so from the available data on the dopaminergic system. It could analyze that data millions of times more effectively...
Hm. I must be missing something. No, I haven't read all the sequences in detail, so if these are silly, basic, questions - please just point me to the specific articles that answer them.
You have an Oracle AI that is, say, a trillionfold better at taking existing data and producing inferences.
1) This Oracle AI produces inferences. It still needs to test those inferences (i.e. perform experiments) and get data that allow the next inferential cycle to commence. Without experimental feedback, the inferential chain will quickly either expand into an infinity ...
Yes, if you can avoid replacing the solvent. But how do you avoid that, and still avoid the creation of ice crystals? Actually, now that I think of it, there is a possible solution: expressing icefish proteins within neuronal cells. Of course, who knows what they would do to neuronal physiology, and you can't really express them after death...
I'm not sure that less toxic cryoprotectants are really feasible. But yes, that would be a good step forward.
I actually think it's better to keep them together. Trying theoretical approaches as quickly as possible an...
All good reason to keep working on it.
The questions you ask are very complex. The short answers (and then I'm really leaving the question at that point until a longer article is ready):
I don't think any intelligence can read information that is no longer there. So, no, I don't think it will help.
In order, and briefly:
In essence, this tunes down excitatory signals, while tuning up the inhibitory signals. It doesn't actually stop either, and it certainly doesn't interfere with the signalling process...
Thanks. This comment and your other comments have made me substantially reduce my confidence in some form of cryonics working.
We are deep into guessing territory here, but I would think that the coarser option (magnesium, phosphorylation states, other modifications, and the presence and binding status of other cofactors, especially GTPases) would be sufficient. Certainly for a simulated upload.
No, I don't work with Ed. I don't use optogenetics in my work, although I plan to in not too distant future.
All of it! Coma is not a state where temporal resolution is lost!
You can silence or deactivate neurons in thousands of ways, by altering one or more signalling pathways within the cells, or by blocking a certain channel. The signalling slows down, but it doesn't stop. Stop it, and you kill the cell within a few minutes; and even if you restart things, signalling no longer works the way it did before.
I don't believe so. Distortion of the membranes and replacement of solvent irretrievably destroys information that I believe to be essential to the structure of the mind. I don't think that would ever be readable into anything but a pale copy of the original person, no matter what kind of technological advance occurs (the information simply isn't there to be read, regardless of how advanced the reader may be).
This was supposed to be a quick side-comment. I have now promised to eventually write a longer text on the subject, and I will do so - after the current "bundle" of texts I'm writing is finished. Be patient - it may be a year or so. I am not prepared to discuss it at the level approaching a scientific paper; not yet.
Keep in mind two things. I am in favor of life extension, and I do not want to discourage cryonic research (we never know what's possible, and research should go on).
Thanks. While a scientific paper would be wonderful, even a blog post would be a huge step forward. In so far as a technical case has been made against cryonics, it is either Martinenaite and Tavenier 2010, or it is technically erroneous, or it is in dashed-off blog comments that darkly hint and never get into the detail. The bar you have to clear to write the best ever technical criticism of cryonics is a touch higher than it was when I first blogged about it, but still pretty low.
Perhaps a better definition would help: I'm thinking about active zones within a synapse. You may have one "synapse" which has two or more active zones of differing sizes (the classic model, Drosophila NMJ, has many active zones within a synapse). The unit of integration (the unit you need to understand) is not always the synapse, but is often the active zone itself.
In general, uploading a C. elegans, i.e. creating an abstract artificial worm? Entirely doable. Will probably be done in not-too-distant future.
Uploading a particular C. elegans, so that the simulation reflects learning and experiences of that particular animal? Orders of magnitude more difficult. Might be possible, if we have really good technology and are looking at the living animal.
Uploading a frozen C. elegans, using current technology? Again, you might be able to create an abstract worm, with all the instinctive behaviors, and maybe a few particular...
Local ion channel density (i.e. active zones), plus the modification status of all those ion channels, plus the signalling status of all the presynaptic and postsynaptic modifiers (including NO and endocannabinoids).
You see, knowing the strength of all the synapses on a particular neuron won't tell you how that neuron will react to inputs. You also need temporal resolution: when a signal hits synapse #3489, what will be the exact state of that synapse? The state determines how and when the signal will be passed on. And when the potential from that input ...
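As a toy illustration of that state-dependence (a simplified Tsodyks-Markram-style short-term plasticity model in Python; the parameters are made-up values for the example, not anything from this discussion):

```python
import math

# Toy model of state-dependent synaptic transmission (simplified
# Tsodyks-Markram short-term plasticity; parameters are illustrative).
A, U = 1.0, 0.2                  # baseline efficacy, release increment
tau_rec, tau_facil = 0.8, 0.5    # resource recovery / facilitation decay (s)

def responses(spike_times):
    """Postsynaptic response to each spike, given the synapse's state."""
    u, x, last_t = 0.0, 1.0, None  # facilitation, available resources
    out = []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            x = 1.0 - (1.0 - x) * math.exp(-dt / tau_rec)  # resources recover
            u = u * math.exp(-dt / tau_facil)              # facilitation decays
        u = u + U * (1.0 - u)     # facilitation jumps at spike arrival
        out.append(A * u * x)     # response depends on current (u, x) state
        x = x * (1.0 - u)         # a fraction u of resources is consumed
        last_t = t
    return out

# Same synapse, same baseline "strength" A - very different responses:
print(responses([0.0, 0.05, 0.10, 0.15]))  # 20 Hz burst
print(responses([0.0, 1.0, 2.0, 3.0]))     # sparse spikes
```

The point of the sketch: a single number for "synaptic strength" underdetermines the output; the response sequence depends on when each signal arrives relative to the synapse's recovering state.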
I agree with you on both points. And also about the error bars - I don't think I can "prove" cryonics to be pointless.
But one has to make decisions based on something. I would rather build a school in Africa than have my body frozen (even though, to reiterate, I'm all for living longer, and I do not believe that death has inherent value).
The biggest obstacles are membrane distortions, solvent replacement and signalling event interruptions. The mind is not so much written into the structure of the brain as into the structure + dynamic activity. In a sense...
I'll eventually organize my thoughts in something worthy of a post. Until then, this has already gone into way more detail than I intended. Thus, briefly:
The damage that is occurring: distortion of membranes, denaturation of proteins (very likely), disruption of signalling pathways. Just changing the exact localization of Ca microdomains within a synapse can wreak havoc; replacing the liquid completely? Not going to work.
I don't necessarily think that low temps have anything to do with denaturation. Replacing the solvent, however, would do it almost unav...
Whether "working memory" is memory at all, or whether it is a process of attentional control as applied to normal long-term memory... we don't know for sure. So in that sense, you are totally right.
But the exact nature of the process is, perhaps strangely, unimportant. The question is whether the process can be enhanced, and I would say that the answer is very likely to be yes.
Also, keep in mind that the working-memory-enhancement scenario is just one I pulled from thin air as an example. The larger point is that we are rapidly gaining the a...
It would appear that all of us have very similar amounts of working memory space. It gets very complicated very fast, and there are some aspects that vary a lot. But in general, its capacity appears to be the bottleneck of fluid intelligence (and a lot of crystallized intelligence might be, in fact, learned adaptations for getting around this bottleneck).
How superior would it be? There are some strong indications that adding more "chunks" to the working space would be somewhat akin to adding more qubits to a quantum computer: if having four "...
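A toy illustration of the flavor of that analogy (my own hypothetical arithmetic, not a neuroscientific claim): the space of chunk combinations grows exponentially with the number of slots, much as a quantum register's state space grows with qubits.

```python
# Toy arithmetic (hypothetical): combinations available to a working
# memory with n chunk-slots grow exponentially, roughly like 2^n.

def n_combinations(n_chunks: int) -> int:
    """Non-empty subsets of n chunks that could be held in relation."""
    return 2**n_chunks - 1

for n in (4, 5, 7, 10):
    print(f"{n} chunks -> {n_combinations(n)} possible combinations")
```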
Um...there is quite a bit of information. For instance, one major hurdle was ice crystal formation, which has been overcome - but at the price of toxicity (currently unspecified, but - in my moderately informed guesstimate - likely to be related to protein misfolding and membrane distortion).
We also have quite a bit of knowledge of synaptic structure and physiology. I can make a pretty good guess at some of the problems. There are likely many others (many more problems that I cannot predict), but the ones I can are pretty daunting.
Ok, now we are squeezing a comment way too far. Let me give you a fuller view: I am a neuroscientist, and I specialize in the biochemistry/biophysics of the synapse (and its interactions with the ER and mitochondria there). I also work on membranes and the effect of lipid composition in the opposing leaflets of all the organelles involved.
Looking at what happens during cryonics, I do not see any physically possible way this damage could ever be repaired. Reading the structure and "downloading it" is impossible, since many aspects of synaptic strength an...
No doubt you can identify particular local info that is causally effective in changing local states, and that is lost or destroyed in cryonics. The key question is the redundancy of this info with other info elsewhere. If there is lots of redundancy, then we only need one place where it is held to be preserved. Your comments here have not spoken to this key issue.
If you have a technical argument against cryonics, please write it up as an actual blog post, ideally under your real name so you can flash your credentials. It will be the most substantial essay arguing for such a point ever written: see my blog. I'm pretty convinced that if there were really a strong argument of the sort you're trying to make, someone would already have written it up, so I take the absence of such a write-up as strong evidence that no such argument exists.
Fascinating. I've been waiting for a while for a well-educated neuroscientist to come here, as I think there are a lot of interesting questions that hinge on issues in neuroscience that are at least hard for me to answer (my only exposure to it is a semester-long class in undergrad). In particular, I'd be interested to know what level of resolution you think would be necessary to simulate a brain to actually get reasonable whole-brain emulations (for instance, is neuronal population level enough? Or do we need to look at individual neurons? Or even further, to look at local ion channel density on the membrane?)
Let me add to your description of the "Loci method" (also the basis of ancient Ars Memoria). You are using spatial memory (which is probably the evolutionarily oldest/most optimized) to piggyback the data you want to memorize.
There is an easier way for people who don't do that well in visualization. Divide a sheet of paper into areas, then write down notes on what you are trying to remember. Make areas somewhat irregular, and connect them with lines, squiggles, or other unique markers. When you write them, and when you look them over, make note o...
I can try, but the issue is too complex for comments. A series of posts would be required to do it justice, so mind the relative shallowness of what follows.
I'll focus on one thing. An artificial enhancement which adds more "spaces" to working memory would create a human being capable of thinking far beyond any unenhanced human. This is not just a quantitative jump: we aren't talking about someone who thinks along the same lines, just faster. We are talking about a qualitative change, making connections that are literally impossible to...
? No.
I fully admitted that I have only an off-the-cuff estimation (i.e. something I'm not very certain about).
Then I asked you if you have something better - some estimate based in reality?
Good point. I'm trying to cast a wide net, to see whether there are highly transferable skills that I haven't considered before. There are no plans (yet), this is simply a kind of musing that may (or may not) become a basis for thinking about plans later on.
What are the numbers that lead to your 1% estimate?
I will eventually sit down and make a back-of-the-envelope calculation on this, but my current off-the-cuff estimate is about twenty (yes, twenty) orders of magnitude lower.
I see. So if you made a billion trillion independent statements of the form "cryonics won't work", on topics you were equally sure about, you'd be pretty confident you were right on all of them?
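For concreteness, the arithmetic implied here (assuming the roughly twenty-orders-of-magnitude-below-1% figure above, i.e. a failure probability of about 10^-22 per statement):

```latex
% "A billion trillion" statements = 10^{9} \times 10^{12} = 10^{21} of them,
% each wrong with probability ~10^{-22}:
P(\text{all correct}) \approx \left(1 - 10^{-22}\right)^{10^{21}}
                      \approx e^{-10^{21}\cdot 10^{-22}} = e^{-0.1} \approx 0.90
```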
If what you say were true - we "never cured cancer in small mammals" - then yes, the conclusion that cancer research is bullshit would have some merit.
But since we did cure a variety of cancers in small mammals, and since we are constantly (if slowly) improving both length of survival and cure rates in humans, the comparison does not stand.
(Also: the integration unit of the human mind is not the synapse; it is an active zone, a molecular raft within a synapse. My personal view, as a molecular biophysicist turned neuroscientist, is that freezing dam...
Very good. Objection 2 in particular resonates with my view of the situation.
One other thing that is often missed is the fact that SI assumes that the development of superintelligent AI will precede other possible scenarios - including the augmented-human-intelligence scenario (CBI producing superhumans with human motivations and emotions, but hugely enhanced intelligence). In my personal view, this scenario is far more likely than the creation of either friendly or unfriendly AI, and the problems related to this scenario are far more pressing.
The possession of knowledge does not kill the sense of wonder and mystery. There is always more mystery.
-- Anais Nin
I see your point, but I'll argue that yes, crowdsourcing is the appropriate term.
Google may be the collective brain of the entire planet, but it will give you only those results you search for. The entire idea here is that you utilize things you can't possibly think of yourself - which includes "which terms should I put into the Google search."
I may have to edit the text for clarification. In fact, I'm going to do so right now.
True enough. Anything can be overdone.
It's not, not necessarily. There isn't as much research on help-seeking as there should be, but there are some interesting observations.
I'm failing to find the references right now, so take this with several grains of salt, but this is what I recall: asking for assistance does not lower status, and might even enhance it, while asking for a complete solution is indeed status-lowering. I.e. if you ask for hints or general help in solving a problem, that's ok, but if you ask someone to give you an answer directly, that isn't.
But all of that is a bit beside the...
Sure.
Otte, Dialogues Clin Neurosci. 2011;13(4):413-21.
Driessen and Hollon, Psychiatr Clin North Am. 2010 Sep;33(3):537-55.
Flessner, Child Adolesc Psychiatr Clin N Am. 2011 Apr;20(2):319-28.
Foroushani et al., BMC Psychiatry. 2011 Aug 12;11:131.
Books, hmm. I have not read it myself, but I have heard that Leahy's "Cognitive Therapy Techniques: A Practitioner's Guide" is well-regarded. A very commonly recommended less-technical book is Greenberger and Padesky's "Mind Over Mood."
I'm not aware of any research in this area. It appears plausible, but there could be many confounding factors.
Yes. If we have an AGI, and someone sets out to teach it how to lie, I will get worried.
I am not worried about an AGI developing such an ability spontaneously.