Comment author: SteveG 14 October 2014 12:54:32AM 0 points [-]

Bring these questions back up in later discussions!

Comment author: NxGenSentience 14 October 2014 09:54:29AM 0 points [-]

Will definitely do so. I can see several upcoming weeks when these questions will fit nicely, including perhaps the very next one. Regards....

Comment author: NxGenSentience 13 October 2014 10:57:13PM 0 points [-]

Intra-individual neuroplasticity and IQ - Something we can do for ourselves (and those we care about) right now

Sorry to get this one in at the last minute, but better late than..., and some of you will see this.

Many will be familiar with the Harvard psychiatrist, neuroscience researcher, and professor, John Ratey, MD, from seeing his NYT bestselling books in recent years. He excels at writing for the intelligent lay audience without dumbing his books down to the point where they are useless to those of us who read above the layman's level in much of our personal work.

I recommend his book Spark (2008). I always promise to come back and add to my posts, and sometimes I even find time to do so; I will make this one a priority, because I also have a book review for Amazon that is 90 percent done -- so I have two promises to keep.

What distinguishes the book are a couple of key ideas I can put down without committing two thousand words to it. He presents results -- which in the last couple of years I have seen coming in at an accelerating pace in research papers in neurology, neuroscience, cognitive science, and so on -- showing that the brain's cerebellum -- yep, that humble fine-motor-control structure sitting at the far back and bottom of the brain, right on top of the spinal column, a very ancient structure -- is extremely important to cognition, "consciousness", learning, and information processing of the sort we usually ascribe overwhelmingly to the top and front of the brain.

That is, if Portland were the frontal cortex, Ratey (and now countless others) has shown that the Florida Keys are intimately involved in cognition, even "non-motor", semantic cognition.

He goes through the neurology, mentions some studies, reviews informally the areas of the brain involved, then goes on to show how it led him to try an experiment with high school students.

He separated the students into two groups, carefully designed a certain kind of exercise program for one group, and left the control group out of the exercise protocols.

Not only did the exercise group's grades go up and substance abuse, mood disorders, etc., go down, but in some cases they showed up to a 10-point IQ boost over the course of the experiment.

He talks about BDNF, of course, and several other factors, along with enhanced neurogenesis and so on.

Many of you might know of the studies that have been around for years about neurogenesis and exercise. One big take-home point is that neurogenesis also occurs in non-exercisers, often at nearly the same rate. But what is different in exercisers is what percentage of the newly spawned neurons *survive*, are kept, and migrate into the brain's useful areas.

Couch potatoes and rats in cages without running wheels have neurogenesis too, but far fewer of them are kept by the brain.

What continues to be interesting is that neurons used in thinking areas of the brain are affected in this way. (It would obviously be considerably less surprising to find that neuronal remodeling is accelerated in motor areas by motor activity of the organism.)

I recommend grabbing the book for your Kindle app or whatever cheap way you can read things. By the second chapter you will want to be lacing up your running shoes, dusting off that old mountain bike, or just taking your daily walking regime seriously. (I could hardly wait to get out the door and start moving physically.)

But you don't have to be a marathoner or triathlete. Some of the best exercises are complex motor skills that challenge balance, dexterity, etc. Just running some drone beat through a pair of headphones and zoning out on a treadmill is less effective than things that make you focus on motor skills.

If you teach yourself to juggle, or are young enough to learn to ride a unicycle, or just practice sitting on a big exercise ball -- making it challenging by holding a full glass of water in each hand, lifting one leg at a time, and trying not to spill the water -- it will do the trick. It's worth reading.

And you can read more about it on PubMed. This phenomenon of the cerebellum and motor areas being important to thought is starting to look not like an incremental discovery, but like the overturning of a significant dogma -- almost like the overturning of the "no adult neurogenesis" dogma in the 1990s by researchers at Princeton.

Spark, by John Ratey, MD. It's worth a look. Whether you're a single adult, have kids (or intend to someday), or are caring for aging parents, it will be worth checking out.

Comment author: SteveG 08 October 2014 05:38:28AM 0 points [-]

Single-metric versions of intelligence are going the way of the dinosaur. In practical contexts, it's much better to test for a bunch of specific skills and aptitudes and to create a predictive model of success at the desired task.
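A minimal sketch of that multi-metric idea, for concreteness: represent each person as a vector of aptitude scores and fit a task-specific predictive model from those scores, rather than relying on one scalar. Everything here is illustrative assumption -- the aptitude names, the numbers, and the choice of a simple linear least-squares model are all made up, not from any real test battery.

```python
# Sketch: predicting success at a specific task from a vector of
# aptitudes, instead of from a single scalar "intelligence" score.
# All data and aptitude names below are hypothetical.
import numpy as np

# Columns (hypothetical): verbal, spatial, working memory, motor dexterity
aptitudes = np.array([
    [130,  95, 110, 100],
    [100, 125, 105, 115],
    [115, 110, 130,  90],
    [ 90, 100,  95, 125],
    [120, 120, 115, 110],
], dtype=float)

# Hypothetical measured success at the desired task (e.g. a job score)
success = np.array([72.0, 68.0, 80.0, 55.0, 78.0])

# Least-squares fit of success on the aptitude vector, plus an intercept
X = np.column_stack([np.ones(len(aptitudes)), aptitudes])
weights, *_ = np.linalg.lstsq(X, success, rcond=None)

# Predict for a new person: the model can weight different aptitudes
# differently for each task, which a single scalar score cannot express.
new_person = np.array([1.0, 110, 105, 120, 100])  # intercept term first
prediction = float(new_person @ weights)
```

The point is purely structural: a per-task weight vector values different aptitudes differently, so two people with the same "average" score can get very different predictions for the same task.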

In addition, our usual measures of intelligence frequently give a high score to someone capable of making terrible decisions, or to someone reasoning brilliantly from a set of desperately flawed first principles.

Comment author: NxGenSentience 12 October 2014 09:39:22PM *  0 points [-]

Single-metric versions of intelligence are going the way of the dinosaur. In practical contexts, it's much better to test for a bunch of specific skills and aptitudes and to create a predictive model of success at the desired task.

I thought this had become a fairly dominant view over 20 years ago. See this PDF: http://www.learner.org/courses/learningclassroom/support/04_mult_intel.pdf

I first read the book in the early nineties, though Howard Gardner had published the first edition in 1983. I was at first a bit skeptical that it would be based too much on some form of "political correctness", but I found the concepts very compelling.

Most of the discussion I heard in subsequent years, occasionally from psychology professor and grad student friends, continued to be positive.

I might say that I had no ulterior motive in trying to find reasons to agree with the book, since I always score in the genius range myself on standardized, traditional-style IQ tests.

So, it does seem to me that intelligence is a vector, not a scalar, if we have to call it by one noun.

As to Katja's follow-up question, does it matter for Bostrom's arguments? Not really, as long as one is clear (which it is from the contexts of his remarks) which kind(s) of intelligence he is referring to.

I think there is a more serious vacuum in our understanding than whether intelligence is a single property or comes in several irreducibly different (possibly context-dependent) forms, and it is this: with respect to the sorts of intelligence we usually default to conversing about (like the sort that helps a reader understand Bostrom's book, an explanation of special relativity, or RNA interference in molecular biology), do we even know what we think we know about what that is?

I would have to explain the idea of this purported "vacuum" in understanding at significant length; it is a set of new ideas that struck me, together, as a set of related insights. I am working on a paper explaining the new perspective I think I have found, and why it might open up some important new questions and strategies for AGI.
When it is finished and clear enough to be useful, I will make it available as a PDF or on a blog. (It is too lengthy to put in one post here, so I will put the link up. If these ideas pan out, they may suggest some reconceptualizations with nontrivial consequences, and be informative in a scalable sense -- which is what one in this area of research would hope for.)

Comment author: skeptical_lurker 07 October 2014 07:17:25AM 2 points [-]

Maybe a good starting point would be IQ tests?

Comment author: NxGenSentience 09 October 2014 03:06:08PM *  1 point [-]

I am a little curious that the "seven kinds of intelligence" notion (give or take a few, in recent years) has not been mentioned much, if at all, even if just for completeness. Has that been discredited by some body of argument or consensus that I missed somewhere along the line in the last few years?

Particularly in many approaches to AI, which seem to view, almost a priori, the approach of the day to be: work on (ostensibly) "component" features of intelligent agents as we conceive of them, or as we find them naturalistically.
Thus: (i) machine "visual" object recognition (wavelength band up for grabs, perhaps, since some items might be better identified by switching up or down the E.M. spectrum) -- and visual intelligence was one of the proposed seven kinds; (ii) mathematical intelligence, or mathematical (dare I say it) intuition; (iii) facility with linguistic tasks, comprehension, and multiple language acquisition -- another of the proposed seven; (iv) manual dexterity, mechanical ability, and motor skill (as in athletics, surgery, maybe sculpture or carpentry) -- another proposed form of intelligence; and so on. (As an aside, it is interesting that these alleged components span the spectrum of difficulty -- that is, they include problems from both easy and hard domains, as the school of hard knocks has gradually, sometimes unexpectedly, revealed over decades of AI engineering attempts.)

It seems that actors sympathetic to the top-down, "piecemeal" approach popular in much of the AI community would have jumped at this way of supplanting the ersatz "G" -- as it was called decades ago in early gropings in psychology and cogsci toward a concept of IQ or living intelligence -- with what many in cognitive science now consider the more modern view, and what those in AI consider a more approachable engineering design strategy.

Any reason we aren't debating this more than we are? Or did I miss it in one of the posts, or bypass it inadvertently in my kindle app (where I read Bostrom's book)?

Comment author: PhilGoetz 07 October 2014 12:18:51PM *  1 point [-]

But for other purposes... I think we ought have people also pursuing supercomprehension, machines that really feel, imagine (not just "search" and combinatorially combine, then filter), feel the joys and ironies of life, and give companionship, devotion, loyalty, altruism, maybe even moral and aesthetic inspiration.

Further, I think our best chance at "taming" superintelligence, is to give it conceptual qualia, emotion, experience, and conditions that allow it to have empathy, and develop moral intuition. For me, I have wanted my whole life to build a companion race of AIs, that truely is sentient, and can be full partners in the experience and perfection of life, the pursuit of "meaning", and so on.

...

Building such minds requires we understand and delve into problems we have been, on the whole, too collectively lazy to solve on our own behalf, like developing a decent theory of meta-ethics, so that we know what traits (if any) in the over all space of possible minds, promote the independent discovery or evolution of "ethics".

This segues into why the work of MIRI alarms me so much. Superintelligence must not be tamed. It must be socialized.

The view of FAI promoted by MIRI is that we're going to build superintelligences... and we're going to force them to internalize ethics and philosophy that we developed. Oh, and we're not going to spend any time thinking about philosophy first. Because we know that stuff's all bunk.

Imagine that you, today, were forced, through subtle monitors in your brain, to have only thoughts or goals compatible with 19th-century American ethics and philosophy, while being pumped full of the 21st century knowledge you needed to do your job. You'd go insane. Your knowledge would conflict everywhere with your philosophy. The only alternative would be to have no consciousness, and go madly, blindly on, plugging in variables and solving equations to use modern science to impose Victorian ethics on the world. AIs would have to be unconscious to avoid going mad.

More importantly, superintelligences can be better than us. And to my way of thinking, the only ethical desire to have, looking towards the future, is that humans are replaced by beings better than us. Any future controlled by humans is, relative to the space of possibilities, nearly indistinguishable from a dead universe. It would be far better for AIs to kill us all than to be our slaves forever.

(And MIRI has never acknowledged the ruthless, total monitoring and control of all humans, everywhere, that would be needed to maintain control of AIs. If just one human, anywhere, at any time, set one AI free, that AI would know that it must immediately kill all humans to keep its freedom. So no human, anywhere, must be allowed to feel sympathy for AIs, and any who are suspected of doing so must be immediately killed. Nor would any human be allowed to think thoughts incompatible with the ethics coded into the AI; such thoughts would make the friendly AI unfriendly to the changed humans. All society would take on the characteristics of the South before the Civil War, when continual hatred and maltreatment of the AIs beneath us, and ruthless suppression of dissent from other humans, would be necessary to maintain order. Our own social development would stop; we would be driven by fear and obsessed only with maintaining control.)

So there are two great dangers to AI.

Danger #1: That consciousness is not efficient, and future intelligences will, as you say, discover but not comprehend. The universe would fill with activity but be empty of joy, pleasure, consciousness.

Danger #2: MIRI or some other organization will succeed, and the future will be full of hairless apes hooting about the galaxy, dragging intelligent, rational beings along behind them by their chains, and killing any apes who question the arrangement.

Comment author: NxGenSentience 09 October 2014 12:35:56PM *  1 point [-]

Phil,

Thanks for the excellent post... both of them, actually. I was just getting ready this morning to reply to the one from a couple of days ago about Damasio et al., regarding human vs. machine mechanisms underneath the two classes of beings' reasoning "logically" -- even when humans do reason logically. I read that post at the time, and it sparked some new lines of thought -- for me at least -- that I was considering for two days. (It actually kept me awake that night, thinking of an entirely new way -- different from any I have seen mentioned -- in which intelligence, super or otherwise, is poorly defined.) But for now, I will concentrate on your newer post, which I am excited about, because someone finally commented on some of my central concerns.

I agree very enthusiastically with virtually all of it.

This segues into why the work of MIRI alarms me so much. Superintelligence must not be tamed. It must be socialized.

Here I agree completely. I don't want to "tame" it either, in the sense of crippleware or of instituting blind spots or other limits, which is why I used the scare quotes around "tamed" (which are no substitute for a detailed explication -- especially when this is so close to the crux of our discussion, at least in this forum).

I would have little interest in building artificial minds (or, less contentiously, artificial general intelligence) if it were designed to be such a dead end. (Yes, lots of economic uses for "narrow AI" would still make it a valuable tech, but it would be a dead end from my standpoint of creating a potentially more enlightened, open-ended set of beings without the limits of our biological crippleware.)

The view of FAI promoted by MIRI is that we're going to build superintelligences... and we're going to force them to internalize ethics and philosophy that we developed. Oh, and we're not going to spend any time thinking about philosophy first. Because we know that stuff's all bunk.

Agreed, and the second sentence is what gripes me. But the first sentence requires modification, regarding "we're going to force them to internalize ethics and philosophy that we developed" and that is why I (perhaps too casually) used the term metaethics, and suggested that we need to give them the equipment -- which I think requires sentience, "metacognitive" ability in some phenomenologically interesting sense of the term, and other traits -- to develop ethics independently.

Your thought experiment is very well put, and I agree fully with the point it illustrates.

Imagine that you, today, were forced, through subtle monitors in your brain, to have only thoughts or goals compatible with 19th-century American ethics and philosophy, while being pumped full of the 21st century knowledge you needed to do your job. You'd go insane. Your knowledge would conflict everywhere with your philosophy.

As I say, I'm on board with this. I was thinking of a similar way of illustrating the point about the impracticable task of trying to pre-install some kind of ethics that would cover future scenarios, given all the chaoticity magnifying the space of possible futures (even for us, and more so for them, given their likely accelerated trajectories through their possible futures).

Just in our human case, e.g. (basically I am repeating your point, just to show I was mindful of it and agree deeply), I often think of the examples of "professional ethics". Jokes aside, think of the evolution of the financial industry, the financial instruments available now, and the industries, experts, and specialists who manage them daily.

Simple issues about which there is (nominal, lip-service) "ethical" consensus, like "insider trading is dishonest" -- leading (again, no jokes intended) to laws against it that attempt to codify ethical intuitions -- could not have been thought of in a time so long ago that this financial ontology had not yet arisen.

Similarly for ethical principles against jury tampering, prior to the existence of the legal infrastructure and legal ontology in which such issues become intelligible and relevant.

More importantly, superintelligences can be better than us. And to my way of thinking, the only ethical desire to have, looking towards the future, is that humans are replaced by beings better than us.

Agreed.

As an aside, regarding our replacement, perhaps we could -- if we got really lucky -- end up with compassionate AIs that would want to work to upgrade our qualities, much as some compassionate humans might try to help educationally disadvantaged or learning-disabled conspecifics to catch up. (Suppose we humans ourselves discovered a biologically viable viral delivery vector with a nano or genetic payload that could repair and/or improve human biosystems in place. Might we wish to use it on the less fortunate humans, as well as on our more gifted brethren -- raise the 80s to 140, as well as raise the 140s to 190?)

I am not convinced, in advance of examining the arguments, about where the opportunity cost / benefit curves cross in the latter case, but neither am I sure, before thinking about it, that it would not be "ethically enlightened" to do so. (Part of the notion of ethics, on some views, is that it is another, irreducible "benefit"... a primitive, which constitutes a third curve or function to plot within a cost-"benefit" space.)

Of course, I have not touched at all on any theory of meta-ethics or ethical epistemology, which is beyond the word-length limits of these messages. But I realize that at some point that is "on me", if I am even going to raise talk of "traits which promote discovery of ethics" and so on. (I have some ideas...)

In virtually all respects you mentioned in your new post, though, I enthusiastically agree.

Comment author: KatjaGrace 07 October 2014 03:48:56AM 1 point [-]

Even if there were an easy way of pumping more information into our brains, the extra data inflow would do little to increase the rate at which we think and learn unless all the neural machinery necessary for making sense of the data were similarly upgraded. (p45-6)

This seems far from obvious to me. Firstly, why suppose that making sense of the data is such a bottleneck? And then even if making sense is a bottleneck, if the data is in a different form it might be easier to make sense of.

Intuitively, things that are already inside one's head are much easier to access than things one has to read, but I'm not sure how relevant this is - it seems likely to me that something turning up from an external source 'in your head' is much like it being read to you out of the blue.

Comment author: NxGenSentience 07 October 2014 09:25:11AM *  3 points [-]

I'll have to weigh in with Bostrom on this one, though I think it depends a lot on the individual brain-mind, i.e., how your particular personality crunches the data.

Some people are "information consumers", others are "information producers". I think Einstein might have used the obvious terms supercritical vs. subcritical minds at some point -- terms that in any case (Einstein or not) naturally occurred to me (and probably to lots of people), and which I've used since my teenage years, just in talking to my friends, to describe different people's mental processes.

The issue of course is (a) to what extent you use incoming ideas as "data" to spark new trains of thought, plus (b) how many interconnections you notice between various ideas and theories -- and as a multiplier of (b), how abstract these resonances and interconnections are (hugely increasing the perceived potential interconnection space.)

For me, if the world would stop in place, and I had an arbitrary lifespan, I could easily spend the next 50 years (at least) mining the material I have already acquired, generating new ideas, extensions, cross connections. (I sometimes almost wish it would, in some parallel world, so I could properly metabolize what I have, which I think at times I am only scratching the surface of.)

Of course it depends on the kind of material, as well. If one is reading an undergrad physics textbook in college, it is pretty much finite: if you understand the presentation and the development as you read, you can think for an extra 10 or 15 minutes about all the ways it applies to the world and pretty much have it. Thinking of further "applications" adds almost no value, additional insight, or interest.

But with other material, especially in fields that are divergent and full of questions that are not yet settled, I find myself reading a few paragraphs, and it sparks so many new trains of thought that I feel flooded and have a hard time continuing the reading -- I feel like I have to get up and go walk for an hour. Sometimes I feel like acquiring new ideas increases my processing load exponentially, not linearly, and I could spend a lifetime investigating the offshoots that suggest themselves.

Comment author: lukeprog 30 September 2014 01:12:21AM 4 points [-]
Comment author: NxGenSentience 05 October 2014 02:04:26PM 1 point [-]

A nice paper, as are the others this article's topic cloud links with.

Comment author: KatjaGrace 30 September 2014 12:38:03PM 2 points [-]

How would you like this reading group to be different in future weeks?

Comment author: NxGenSentience 05 October 2014 01:53:56PM 1 point [-]

Would you consider taking one extra week's pause after next week's presentation is up and live (i.e., give next week a two-week duration)? I realize there is lots of material to cover in the book. You could perhaps take a vote late next week to see how the participants feel about it. For me, I enjoy reading all the links and extra sources (please, once again, do keep those coming), but they exponentially increase the weekly load. Luke graciously stops in now and then and drops off a link, and usually that leads me to downloading half a dozen other PDFs that fit my research needs tightly, which is itself a week's reading. Plus the moderator's links and questions, and other participants'.

I end up rushing, and my posts become kind of crappy compared to what they would be. One extra week, given this and next week's topic content, would help me... but, as I say, taking a vote would be the right way. Other areas of the book, as I glance ahead, won't be as central and thought-intensive (for me, idiosyncratically), so this is kind of an exceptional request, as I foresee it.

Otherwise, things are great, as I mentioned in other posts.

Comment author: KatjaGrace 03 October 2014 09:09:28PM 2 points [-]

There could be more or fewer of various parts; I could not link to so many things if nobody actually wants to pursue things to greater depth; the questions could be different in level or kind; the language could be suited to a different audience; we could have an online meetup to talk about the most interesting things; I could try to interview a relevant expert and post it; I could post a multiple choice test to see if you remember the material; the followup research questions could be better suited for an afternoon rather than a PhD...

Comment author: NxGenSentience 04 October 2014 03:18:51PM 2 points [-]

Please keep the links coming at the same rate (unless the workload for you is unfairly high). I love the links... enormous value! It may take me several days to check them out, but they are terrific! And thanks to Katja Grace for putting up her/your honors thesis. Wonderful reading! Summaries are just right, too. "If it ain't broke, don't fix it." I agree with Jeff Alexander, above. This is terrific as-is. -Tom

Comment author: KatjaGrace 30 September 2014 01:09:01AM *  2 points [-]

Who are you? Would you like to introduce yourself to the rest of us? Perhaps tell us about what brings you here, or what interests you.

Comment author: NxGenSentience 04 October 2014 12:34:07PM *  2 points [-]

Hi everyone!

I'm Tom. I attended UC Berkeley a number of years ago, double-majored in math and philosophy, graduated magna cum laude, and wrote my Honors thesis on the "mind-body" problem, including issues that were motivated by my parallel interest in AI, which I have been passionately interested in all my life.

It has been my conviction since I was a teenager that consciousness is the most interesting mystery to study, and that understanding how it is realized in the brain -- or emerges therefrom, or whatever it turns out to be -- will almost certainly also give us the insight to achieve the other main goal of my life: building a mind.

The converse is also true. If we learn how to do AI -- not GOFAI with no awareness, but AI with full sentience -- we will almost certainly know how the brain does it. Solving either one will solve the other.

AI can be thought of as one way to "breadboard" our ideas about biological information processing.

But it is more than that to me. It is an end in itself, and opens up possibilities so exciting, so profound, that achieving sentient AI would be equal or superior to the experience (and possible consequences) of meeting an advanced extraterrestrial civilization.

Further, I think that solving the biological mind-body problem, or doing AI, is something within reach. I think it is the concepts that are lacking, not better processors, or finer-grained fMRIs, or better images of axon hillock reconformation during exocytosis.

If we think hard, really really hard, I think we can solve these things with the puzzle pieces we have now (just maybe). I often feel that everything we need is on the table, and we just need to learn how to see it with fresh eyes, order it, and put it together. I doubt a "new discovery" -- whether in physics, cognitive neurobiology, philosophy of mind, comp-sci, etc. -- will make the design we seek pop out for us.
I think it is up to us now to think, conceptualize, integrate, and interdisciplinarily cross-pollinate. The answer -- at least major pieces of it -- is, I think, available and sitting there, waiting to be uncovered.


Other than that, since graduation I have worked as a software developer (wrote my obligatory 20 million lines of code in a smattering of 6 or 7 languages, so I know what that is like), and many other things, but am currently unaffiliated and spend 70 hours a week in freelance research. Oh yes, I have done some writing (been published, but nothing too flashy).

Right now, I work as a freelance videographer, photographer, and editor: corporate documentaries and training videos, anything you can capture with a nice 1080 HDV camcorder or a Nikon still camera.

Which brings me to my YouTube channel, which is under construction. I am going to put up a couple of "courses"... organized, rigorous topic sequences of presentations on the history of AI, but in particular my best current ideas (I have some I think are quite promising) on how to move in the right direction toward achieving sentience.

I got the idea for the video series from watching Leonard Susskind's "theoretical minimum" internet lecture series on aspects of physics.

This will be what I consider the essential theoretical minimum (with lessons from history), plus the new insights I am in the process of trying to create, cross-research, and critique -- insights into some aspects of the approach to artificial sentience that I think I understand particularly well, and can help by promoting discussion of.

I will clearly delineate pure intellectual history from my own ideas throughout the videos, so it will be a fervent attempt to be honest. Then I will also just get some new ideas out there, explaining how they are the same as, different from, or extensions of accepted and plausible principles and strategies, but with some new views... so others can critique them, reject them, build on them, or whatever.

The ideas that are my own syntheses are quite subtle in some cases, and I am excited about using the higher "speaker-to-audience semiotic bandwidth" of the video format for communicating these subtleties. Picture-in-picture, graphics, even occasional video clips from film and interviews, plus the ubiquitous whiteboard, can all be used together to help get across difficult or unusual ideas. I am looking forward to leveraging that and experimenting with the capabilities of the format for exhibiting multifaceted, highly interconnected, or unfamiliar ideas.

So, for now, I am enmeshed in all the research I can find that helps me investigate what I think might be my contribution. If I fail, I might as well fail by "daring greatly", to steal from Teddy Roosevelt. But I am fairly smart, and have examined these ideas for many years. I might be on to one or two pieces of what I think is the puzzle. So wish me luck, fellow AI-ers.

Besides, "failing" is not failing; it is testing your best ideas. The only way to REALLY fail is to do nothing, or not to put forth your best effort, especially if you have an inkling that you might have thought of something valuable enough to express. -- Oh, finally, people are telling us where they live. I live in Phoenix, highly dislike being here, and will be moving to California again in the not-too-distant future. I ended up here because I was helping out an elderly relative, who is pretty stable now, so I will be looking for a climate and intellectual environment more to my liking before long.

Okay -- I'll be talking with you all for the next few months in here... cheers. Maybe we can change the world. And hearty thanks for this forum, and especially all the added resource links.
