Thanks for posting this link, and for the auxiliary comments. I try to follow these issues as viewed from this sector of thinkers, pretty closely (the web site Defense One often has some good articles, and their tech reporter Patrick Tucker touches on some of these issues fairly often.) But I had missed this paper, until now. Grateful, as I say, for your posting of this.
...Before we continue, one more warning. If you're not already doing most of your thinking at least halfway along the 3-to-4 transition (which I will hereafter refer to as reaching 4/3), you will probably not fully understand what I've written below, because that is, unfortunately, about how far along you have to be before constructive development theory makes intuitive sense to most people. I know that sounds like an excuse so I can say whatever I want, but before reaching 4/3 people tend to find constructive development theory confusing and probably no
I didn't exactly say that, or at least, didn't intend to exactly say that. It's correct of you to ask for that clarification.
When I say "vindicated the theory", that was, admittedly, pretty vague.
What I should have said was that the recent experiments removed what has been by far the most common and persistent objection to the theory, by showing that quantum effects in microtubules, under the kind of environmental conditions that are relevant, can indeed be maintained long enough for quantum processes to "run their course"...
Hi, Yes, for the kickstarter option, that seems to be almost a requirement. People have to see what they are asked to invest in.
The kickstarter option is somewhat my second-choice plan, or I'd be further along on that already. I have several things going on that are pulling me in different directions.
To expand just a bit on the evolution of my YouTube idea: originally – a couple of months before I recognized more poignantly the value to the HLAI R&D community of doing well-designed, issue-sophisticated, genuinely useful (to other than a naïve audienc...
Same question as Luke's. I probably would have jumped at it. I have a standing offer to make hi-def (1080) video interviews, documentaries, etc and competent, penetrating Q and A sessions, with people like Bostrom, Google-ites setting up the AI laboratories, and other vibrant, creative, contemporary AI-relevant players.
I have knowledge of AI, general comp sci, deep and broad neuroscience, the mind-body problem (philosophically understood in GREAT detail -- college honors thesis at UCB was on that) and deep, detailed knowledge of all the big neurophilosophy play...
Same question as Luke's. I probably would have jumped at it, if only to make seed money to sponsor other useful projects, like the following.
I have a standing offer to make hi-def (1080) video interviews, documentaries, etc and competent, penetrating Q and A sessions and documentaries with key, relevant players and theoreticians in AI and related work. This includes individual thinkers, labs, Google's AI work, the list is endless.
I have knowledge of AI, general comp sci, considerable knowledge of neuroscience, the mind-body problem (philosophically und...
It's nice to hear a quote from Wittgenstein. I hope we can get around to discussing the deeper meaning of this, which applies to all kinds of things... most especially, the process by which each kind of creature -- bats, fish, homo sapiens, and potential embodied artifactual (n.1) minds (and also minds not embodied in the sense of the term most often used today; Watson was not embodied in that sense) -- *constructs its own ontology* (or ought to, by virtue of being imbued with the right sort of architecture).
That latter sense, and the incommensurabil...
People do not behave as if we have utilities given by a particular numerical function that collapses all of their hopes and goals into one number, and machines need not do it that way, either.
I think this point is well said, and completely correct.
..
...Why not also think about making other kinds of systems?
An AGI could have a vast array of hedges, controls, limitations, conflicting tendencies and tropisms which frequently cancel each other out and prevent dangerous action.
The book does scratch the surface on these issues, but it is not all about fail-saf
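The kind of architecture gestured at above -- many independent controls, any one of which can veto a dangerous action -- can be sketched in a few lines. Everything here (the monitor names, their rules, and the action format) is invented purely for illustration; this is a toy, not a proposal for a real safety architecture:

```python
# Toy illustration: an agent whose proposed actions must survive a battery
# of independent monitors, any one of which can veto. All names and rules
# are invented for this example.

class Monitor:
    def __init__(self, name, is_unsafe):
        self.name = name
        self.is_unsafe = is_unsafe  # predicate over a proposed action

    def vetoes(self, action):
        return self.is_unsafe(action)

def filter_actions(actions, monitors):
    """Return only the actions that no monitor vetoes."""
    return [a for a in actions
            if not any(m.vetoes(a) for m in monitors)]

monitors = [
    Monitor("resource_cap", lambda a: a.get("cpu_hours", 0) > 100),
    Monitor("no_self_modification", lambda a: a.get("edits_own_code", False)),
    Monitor("human_signoff", lambda a: a.get("irreversible", False)
                                       and not a.get("approved", False)),
]

proposals = [
    {"name": "summarize_report", "cpu_hours": 2},
    {"name": "rewrite_own_planner", "edits_own_code": True},
    {"name": "delete_dataset", "irreversible": True, "approved": False},
]

allowed = filter_actions(proposals, monitors)
print([a["name"] for a in allowed])  # only the benign proposal survives
```

The point of the sketch is that the "conflicting tendencies" need not be coordinated: each monitor is ignorant of the others, yet together they narrow the action space.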
My general problem with "utilitarianism" is that it's sort of like Douglas Adams' "42." An answer of the wrong type to a difficult question. Of course we should maximize, that is a useful ingredient of the answer, but is not the only (or the most interesting) ingredient.
Taking off from the end of that point, I might add (but I think this was probably part of your total point, here, about "the most interesting" ingredient) that people sometimes forget that utilitarianism is not a theory itself about what is normatively desi...
One way intelligence and goals might be related is that the ontology an agent uses (e.g. whether it thinks of the world it deals with in terms of atoms or agents or objects) as well as the mental systems it has (e.g. whether it has true/false beliefs, or probabilistic beliefs) might change how capable it is...
This is totally right as well. We live inside our ontologies. I think one of the most distinctive, and important, features of acting, successfully aware minds (I won't call them "intelligences" because of what I am going to say further down,...
One way intelligence and goals might be related is that the ontology an agent uses (e.g. whether it thinks of the world it deals with in terms of atoms or agents or objects) as well as the mental systems it has (e.g. whether it has true/false beliefs, or probabilistic beliefs) might change how capable it is, as well as which values it can comprehend.
I think the remarks about goals being ontologically associated are absolutely spot on. Goals, and any “values” distinguishing among the possible future goals in the agent's goal space, are built around tha...
To continue:
If there are untapped human cognitive-emotive-apperceptive potentials (and I believe there are plenty), then all the more openness to undiscovered realms of "value" knowledge, or experience, when designing a new mind architecture, is called for. To me, that is what makes HLAI (and above) worth doing.
But to step back from this wondrous, limitless potential, and suggest some kind of metric based on the values of the "accounting department", those who are famous for knowing the cost of everything but the value of nothing, and ...
Thanks, I'll have a look. And just to be clear, watching *The Machine* wasn't driven primarily by prurient interest -- I was drawn in by a reviewer who mentioned that the backstory for the film was a near-future worldwide recession pitting the West against China, and that intelligent battlefield robots and other devices were the "new arms race" in this scenario.
That, and that the film reviewer mentioned that (i) the robot designer used quantum computing to get his creation to pass the Turing Test (a test I have doubts about as do other res...
Perhaps we should talk about something like productivity instead of intelligence, and quantify according to desirable or economically useful products.
I am not sure I am very sympathetic with a pattern of thinking that keeps cropping up, viz., as soon as our easy and reflexive intuitions about intelligence become strained, we seem to back down the ladder a notch, and propose just using an economic measure of "success".
Aside from (i) somewhat of a poverty of philosophical imagination (e.g. what about measuring the intrinsic interestingness of...
If we could easily see how a rich conception of consciousness could supervene on pure information
I have to confess that I might be the one person in this business who never really understood the concept of supervenience -- either "weak supervenience" or "strong supervenience." I've read Chalmers, Dennett, the journals on the concept... never really "snapped-in" for me. So when the term is used, I have to just recuse myself and let those who do understand it, finish their line of thought.
To me, supervenience seems like a fu...
Thanks for the very nice post.
Three types of information in the brain (and perhaps other platforms), and (coming soon) why we should care
Before I make some remarks, I would recommend Leonard Susskind’s very accessible 55-minute YouTube presentation called “The World as Hologram.” (For those who don’t know him already – though most folks in here probably do – he is a physicist at the Stanford Institute for Theoretical Physics.) It is not as corny as it might sound, but is a lecture on the indestructibility of information and black holes (which is a convenient lodestone for him to discuss the ...
Well, I ran several topics together in the same post, and that was perhaps careless planning. And, in any case I do not expect slavish agreement just because I make the claim.
And neither should you expect agreement, just by flatly denying it, with nary a word to clue me in about your reservations about what has, in the last 10 years, transitioned from a convenient metaphor in quantum physics, cosmology, and other disciplines into a growing consensus about the actual truth of things. (Objections to this growing consensus, when they actually are made, seem to be mostly argu...
A cell can be in a huge number of internal states. Simulating a single cell in a satisfactory way will be impossible for many years. What portion of this detail matters to cognition, however? If we have to consider every time a gene is expressed or protein gets phosphorylated as an information processing event, an awful lot of data processing is going on within neurons, and very quickly.
I agree not only with this sentence, but with this entire post. Which of the many, many degrees of freedom of a neuron, are "housekeeping" and don't contribut...
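To put a toy number on the point above: if every phosphorylation and gene-expression event counted as an information-processing event, the per-neuron event rate would swamp the spike rate by orders of magnitude. All the figures below are rough order-of-magnitude guesses assumed purely for illustration, not measured values:

```python
# Back-of-envelope arithmetic: counting every molecular event as
# "information processing" yields rates far above the spike rate.
# Every number here is an assumed order-of-magnitude placeholder.

proteins_per_neuron = 1e9          # assumed order of magnitude
phosphorylation_fraction = 1e-3    # assumed fraction modified per second
gene_expression_events = 1e3       # assumed transcription events per second

molecular_events_per_sec = (proteins_per_neuron * phosphorylation_fraction
                            + gene_expression_events)

spikes_per_sec = 10                # typical cortical firing rate, roughly

ratio = molecular_events_per_sec / spikes_per_sec
print(f"~{molecular_events_per_sec:.0e} molecular events/s vs "
      f"{spikes_per_sec} spikes/s (ratio ~{ratio:.0e})")
```

Even under these deliberately conservative placeholders, the molecular event rate exceeds the spike rate by a factor of ~10^5, which is exactly why the "which degrees of freedom are housekeeping?" question matters.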
Will definitely do so. I can see several upcoming weeks when these questions will fit nicely, including perhaps the very next one. Regards....
Intra-individual neuroplasticity and IQ - Something we can do for ourselves (and those we care about) right now
Sorry to get this one in at the last minute, but better late than..., and some of you will see this.
Many will be familiar with the Harvard psychiatrist, neuroscience researcher, and professor of medicine John Ratey, MD, from seeing his NYT bestselling books in recent years. He excels at writing for the intelligent lay audience, yet without dumbing down his books to the point where they are useless to those of us who read above the layman's level in...
Single-metric versions of intelligence are going the way of the dinosaur. In practical contexts, it's much better to test for a bunch of specific skills and aptitudes and to create a predictive model of success at the desired task.
I thought that this had become a fairly dominant view, over 20 years ago. See this PDF: http://www.learner.org/courses/learningclassroom/support/04_mult_intel.pdf
I first read the book in the early nineties, though Howard Gardner had published the first edition in 1982. I was at first a bit extra skeptical that it would be ba...
I am a little curious that the "seven kinds of intelligence" (give or take a few, in recent years) notion has not been mentioned much, if at all, even if just for completeness.... Has that been discredited by some body of argument or consensus, that I missed somewhere along the line, in the last few years?
Particularly in many approaches to AI, which seem to view, almost a priori (I'll skip the italics and save them for emphasis) the approach of the day to be: work on (ostensibly) "component" features of intelligent agents as we conceive...
Phil,
Thanks for the excellent post ... both of them, actually. I was just getting ready this morning to reply to the one from a couple days ago about Damasio et al., regarding human vs machine mechanisms underneath the two classes of beings' reasoning "logically" -- even when humans do reason logically. I read that post at the time and it sparked some new lines of thought - for me at least - that I was considering for two days. (Actually kept me awake that night, thinking of an entirely new way -- different from any I have seen mentioned -- in...
I'll have to weigh in with Bostrom on this one, though I think it depends a lot on the individual brain-mind, i.e., how your particular personality crunches the data.
Some people are "information consumers", others are "information producers". I think Einstein might have used the obvious terms supercritical vs. subcritical minds at some point -- terms that in any case (Einstein or not) naturally occurred to me (and probably lots of people), and that I've used since my teenage years, just in talking to my friends, to describe different people's m...
A nice paper, as are the others this article's topic cloud links with.
Would you consider taking one extra week's pause after next week's presentation is up and live (i.e., give next week a two-week duration)? I realize there is lots of material to cover in the book. You could perhaps take a vote late next week to see how the participants feel about it. For me, I enjoy reading all the links and extra sources (please, once again, do keep those coming), but it greatly increases the weekly load. Luke graciously stops in now and then and drops off a link, and usually that leads me to downloading half a dozen other PDFs tha...
Please keep the links coming at the same rate (unless the workload for you is unfairly high.) I love the links... enormous value! It may take me several days to check them out, but they are terrific! And thanks to Caitlin Grace for putting up her/your honors thesis. Wonderful reading! Summaries are just right, too. "If it ain't broke, don't fix it." I agree with Jeff Alexander, above. This is terrific as-is. -Tom
Hi everyone!
I'm Tom. I attended UC Berkeley a number of years ago, double-majored in math and philosophy, graduated magna cum laude, and wrote my Honors thesis on the "mind-body" problem, including issues that were motivated by my parallel interest in AI, which I have been passionately interested in all my life.
It has been my conviction since I was a teenager that consciousness is the most interesting mystery to study, and that, understanding how it is realized in the brain -- or emerges therefrom, or whatever it turns out to be -- will also alm...
lukeprog,
I remember reading Jeff Hawkins' On Intelligence 10 or 12 years ago, and found his version of the "one learning algorithm" extremely intriguing. I remember thinking at the time how elegant it was, and noting the multiple fronts on which it conferred explanatory power. I see why Kurzweil and others like it too.
I find myself, ever since reading Jeff's book (and hearing some of his talks later), sometimes musing -- as I go through my day, noting the patterns in my expectations and my interpretations of the day's events -- about his memory-prediction...
Why ‘WB’ in “WBE” is not well-defined and why WBE is a worthwhile research paradigm, despite its nearly fatal ambiguities.
Our community (in which I include cognitive neurobiologists, AI researchers, philosophers of mind, research neurologists, behavioral and neuro-zoologists and ethologists, and anyone here) has, for some years, included theorists who present various versions of “extended mind” theories.
Without taking any stances about those theories (and I do have a unique take on those) in this post, I’ll outline some concerns about extended brain issues...
edited out by author... citation needed, I'll add later
One’s answer depends on how imaginative one wants to get. One situation is if the AI were to realize we had unknowingly trapped it in too deep a local optimum fitness valley for it to progress upward significantly without significant rearchitecting. We might ourselves be trapped in a local optimality bump or depression, and have transferred some resultant handicap to our AI progeny. If it, with computationally enhanced resources, can "understand" indirectly that it is missing something (analogy: we can detect "invisible" celestial objects ...
I love this question. As it happens, I wrote my honors thesis on the mind-body problem (while I was a philosophy and math double-major at UC Berkeley), and have been passionately interested in consciousness, brains (and also AI) ever since (a couple decades.)
I will try to be self-disciplined and remain as agnostic as I can – by not steering you only toward the people I think are more right (or “less wrong”.) Also, I will resist the tendency to write 10 thousand word answers to questions like this (which in any case would still barely scratch the surface ...
Yes, many. Go to PubMed and start drilling around; make up some search combinations and you will get immediately onto lots of interesting research tracks. Cognitive neurobiology, systems neurobiology, and the many other areas and journals you'll run across will keep you busy. There is some really terrific, amazing work. Enjoy.
I'd also point out that any forecast that relies on our current best guesses about the nature of general intelligence strikes me as very unlikely to be usefully accurate -- we have a very weak sense of how things will play out, how the specific technologies involved will relate to each other, and (more likely than not) even what they are.
It seems that many tend to agree with you, in that, on page 9 of the Muller - Bostrom survey, I see that 32.5 % of respondents chose "Other method(s) currently completely unknown."
We do have to get what data we c...
Leplen,
I agree completely with your opening statement that if we, the human designers, understand how to make human-level AI, then it will probably be a very clear and straightforward issue to understand how to make something smarter. An easy example is the obvious bottleneck human intellects have with our limited "working" executive memory.
The solutions for lots of problems by us are obviously heavily encumbered by how many things one can keep in mind at "the same time" and see the key connections, all in one act of synthesis. ...
Katja, you are doing a great job. I realize what a huge time and energy commitment it is to take this on... all the collateral reading and sources you have to monitor, in order to make sure you don't miss something that would be good to add in to the list of links and thinking points.
We are still in the get-acquainted, discovery phase, as a group and with the book. I am sure it will get more interesting yet as we go along, and some long-term intellectual friendships are likely to occur as a result of the coming weeks of interaction.
Thanks for your time and work.... Tom
Not so much from the reading, or even from any specific comments in the forum -- though I learned a lot from the links people were kind enough to provide.
But I did, through a kind of osmosis, remind myself that not everyone has the same thing in mind when they think of AI, AGI, human level AI, and still less, mere "intelligence."
Despite the verbal drawing of the distinction between GOFAI and the spectrum of approaches being investigated and pursued today, I have realized by reading between the lines that GOFAI is still alive and well. Maybe it ...
It may have been a judgement call by the writer (Bostrom) and editor: he is trying to get the word out as widely as possible that this is a brewing existential crisis. In this society, how do you get most people's (policymakers, decision makers, basically "the Suits" who run the world) attention?
Talk about the money. Most of even educated humanity sees the world in one color (can't say green anymore, but the point is made.)
Try to motivate people about global warming? ("...um....but, but.... well, it might cost JOBS next month, if we try to...
Watson's Jeopardy win shows that, given enough time, a team of AI engineers has an excellent chance of creating a specialized system which can outpace the best human expert in a much wider variety of tasks than we might have thought before.
One could read that comment on a spectrum of charitableness. I will speak for myself, at the risk of ruffling some feathers, but we are all here to bounce ideas around, not toe any party lines, right? To me, Watson's win means very little, almost nothing. Expert systems have been around for years, even decades. I exp...
An AI can be dangerous only if it escapes our control. The real question is, must we flirt with releasing control in order to obtain a necessary or desirable usefulness?
I had a not unrelated thought as I read Bostrom in chapter 1: why can't we institute obvious measures to ensure that the train does stop at Humanville?
The idea that we cannot make human-level AGI without automatically opening Pandora's box to superintelligence "without even slowing down at the Humanville station" was suddenly not so obvious to me.
I asked myself after read...
This is a really cool link and topic area. I was getting ready to post a note on intelligence amplification (IA), and was going to post it up top on the outer layer of LW, based on language.
I recall many years ago, there was some brief talk of replacing the QWERTY keyboard with a design that was statistically more efficient in terms of human hand ergonomics in executing movements for the most frequently seen combinations of letters (probably was limited to English, given American parochialism of those days, but still, some language has to be chosen.)
Becau...
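The sort of statistical comparison behind such layout arguments starts with letter-pair (bigram) frequencies in a chosen language. A minimal sketch, using a stand-in sample sentence where a real analysis would use a large corpus:

```python
# Count the most frequent adjacent letter pairs (bigrams) in a text.
# The sample string is a placeholder; a real layout analysis would run
# this over a large representative corpus of the target language.

from collections import Counter

def bigram_counts(text):
    """Count adjacent letter pairs, ignoring case and non-letters."""
    letters = [c for c in text.lower() if c.isalpha()]
    return Counter(zip(letters, letters[1:]))

sample = "the quick brown fox jumps over the lazy dog the end"
top = bigram_counts(sample).most_common(3)
print(top)
```

A layout optimizer would then score candidate key arrangements by how ergonomically they handle the highest-frequency pairs (alternating hands, home-row coverage, and so on).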
I have dozens, some of them so good I have actually printed hardcopies of the PDFs-- sometimes misplacing the DOIs in the process.
I will get some, though; some of them are, I believe, required reading for those of us looking at the human brain for lessons about the relationship between "consciousness" and other functions. I have a particularly interesting one (74 pages, but it's a page-turner) that I will try to find the original computer record of. Found it and most of them on PubMed.
If we are in a different thread string in a couple days, I will flag you. I'd like to pick a couple of good ones, so it will take a little re-reading.
Asr,
Thanks for pointing out the wiki article, which I had not seen. I actually feel a tiny bit relieved, but I still think there are a lot of very serious forks in the road that we should explore.
If we do not pre-engineer a soft landing, this is the first existential catastrophe that we should be working to avoid.
A world that suddenly loses encryption (or even faith in encryption!) would be roughly equivalent to a world without electricity.
I also worry about the legacy problem... all the critical documents in RSA, PGP, etc, sitting on hard drives, server...
Luke,
Thanks for posting the link. It's an April 2014 paper, as you know. I just downloaded the PDF and it looks pretty interesting. I'll post my impressions, if I have anything worthwhile to say, either here in Katja's group or up top on LW generally, when I have time to read more of it.
Hi, and thanks for the link. I just read the entire article, which was good for a general news piece and, correspondingly, not definitive (therefore, I'd consider it journalistically honest) about the time frame. "...might be decades away..." and "...might not really see them in the 21st century..." come to mind as lower and upper estimates.
I don't want to get out of my depth here, because I have not exhaustively (or representatively) surveyed the field, nor am I personally doing any of the research.
But I still say I have found a sign...
What do you mean by artificial consciousness to the extent that it's not intelligence, and why do you think the problem is in a form where quantum computers are helpful?
The claim wasn't that artifactual consciousness wasn't (likely to be) sufficient for a kind of intelligence, but that the two are not coextensive. It might have been clearer to say that consciousness is (closer to being) sufficient for intelligence than intelligence (the way computer scientists often use the term) is to being a sufficient condition for consciousness (which it is not at all).
I needn't have...
From what I have read in open-source science and tech journals and news sources, general quantum computing seems to be coming faster than the time frame you had suggested. I wouldn't be surprised to see it as soon as 2024 in prototypical, alpha, or beta testing, and I think it a safe bet by 2034 for wider deployment. As to very widespread adoption, perhaps a bit later; and with respect to efforts by governments to control the tech for security reasons, perhaps also... later here, earlier there.
No changes that I'd recommend, at all. SPECIAL NOTE: please don't interpret the drop in the number of comments the last couple of weeks as a drop in interest by forum participants. The issues of these weeks are the heart of the reason for existence of nearly all the rest of the Bostrom book, and many of the auxiliary papers and references we've seen ultimately also have been context for confronting and brainstorming about the issue now at hand. I myself, just as one example, have a number of actual ideas that I've been working on for two weeks, but I'v...