Comment author: KatjaGrace 11 November 2014 04:18:12AM 1 point [-]

One way intelligence and goals might be related is that the ontology an agent uses (e.g. whether it thinks of the world it deals with in terms of atoms or agents or objects) as well as the mental systems it has (e.g. whether it has true/false beliefs, or probabilistic beliefs) might change how capable it is, as well as which values it can comprehend. For instance, an agent capable of a more detailed model of the world might tend to perceive more useful ways to interact with the world, and so be more intelligent. It should also be able to represent preferences which wouldn't have made sense in a simpler model.

Comment author: NxGenSentience 13 November 2014 12:30:06AM *  0 points [-]

One way intelligence and goals might be related is that the ontology an agent uses (e.g. whether it thinks of the world it deals with in terms of atoms or agents or objects) as well as the mental systems it has (e.g. whether it has true/false beliefs, or probabilistic beliefs) might change how capable it is...

This is totally right as well. We live inside our ontologies. I think one of the most distinctive, and important, features of successfully acting, aware minds (I won't call them "intelligences", for reasons I give further down in this message) is this capacity to mint new ontologies as needed, and to do it well and successfully.

"Successfully" means the ontological additions are useful, somewhat durable constructs, "cognitively penetrable" to our kind of mind, help us flourish, and give a viable foundation for action that "works" -- as well as not backing us into a local maximum or minimum. By that I mean this: "successful" minting of ontological entities enables us to mint additional ones that also "work".

Ontologies create us as much as we create them, and this creative process is I think a key feature of "successful" viable minds.

Indeed, I think this capacity to mint new ontologies, and do it well, is largely orthogonal to the two that Bostrom mentions. That gives three axes: 1) means-end reasoning (what Bostrom might otherwise call intelligence); 2) final or teleological selection of goals from the goal space; and, to my way of thinking, 3) minting ontological entities "successfully" and well.

In fact, in a sense, I would put my third one in position one, ahead of means-end reasoning, if I were to give them a relative dependence. Even though orthogonal -- in that they vary independently -- you have to have the ability to mint ontologies before means-end reasoning has anything to work on. And in that sense, Katja's suggestion that ontologies can confer more power and growth potential (for more successful sentience to come) is something I think is quite right.

But I think all three are pretty self-evidently largely orthogonal, with some qualifications that have been mentioned for Bostrom's original two.


Comment author: NxGenSentience 13 November 2014 12:00:40AM 0 points [-]

One way intelligence and goals might be related is that the ontology an agent uses (e.g. whether it thinks of the world it deals with in terms of atoms or agents or objects) as well as the mental systems it has (e.g. whether it has true/false beliefs, or probabilistic beliefs) might change how capable it is, as well as which values it can comprehend.

I think the remarks about goals being ontologically associated are absolutely spot on. Goals, and any “values” distinguishing among the possible future goals in the agent's goal space, are built around that agent's perceived (actually, inhabited is a better word) ontology.

For example, the professional ontology of a Wall Street financial analyst includes the objects that he or she interacts with (options, stocks, futures, dividends), and the laws and infrastructure associated with the conceptual “deductive closure” of that ontology.

Clearly, “final” -- teleological and moral -- principles involving approach and avoidance judgments (say, involving insider trading, and the negative consequences at a practical level, if not the pure unethicality, of running afoul of the laws and rules of governance for trading those objects) are only defined within an ontological universe of discourse which contains those financial objects and the network of laws and valuations that define -- and are defined by -- those objects.

Smarter beings, or even we ourselves, as our culture evolves, generation after generation becoming more complex, acquire new ontologies and gradually retire others. Identity theft mediated by surreptitiously seeding laptops in Starbucks with keystroke-logging viruses is “theft” and is unethical. But trivially, in 1510 BCE the ontological stage on which this is optionally played out did not exist, and thus the ethical valence would have been undefined, even unintelligible.

That is why, if we can solve the friendliness problem, it will have to be by some means that gives new minds the capacity to develop robust ethical meta-intuition that can be recruited creatively, on the fly, as these beings encounter new situations that call upon them to make new ethical judgements.

I happen to be a version of meta-ethical realist, much as I am something of a mathematical platonist, but in my position this is crossed with a type of constructivist metaethics, apparently like that subscribed to by John Danaher in his blog (after I followed the link and read it).
At least, his position sounds similar to mine, although the constructivist part of my theory is supplemented with a “weak” quasi-platonist thread that I am trying to derive from some more fundamental meta-ontological principles (work in progress on that).

Comment author: NxGenSentience 24 October 2014 04:03:38PM *  -2 points [-]

Perhaps we should talk about something like productivity instead of intelligence, and quantify according to desirable or economically useful products.

I am not sure I am very sympathetic with a pattern of thinking that keeps cropping up, viz., as soon as our easy and reflexive intuitions about intelligence become strained, we seem to back down the ladder a notch, and propose just using an economic measure of "success".

Aside from (i) somewhat of a poverty of philosophical imagination (e.g. what about measuring the intrinsic interestingness of ideas, or creative output of various kinds... or even, dare I say, beauty, if these superintellects happen to find that worth doing [footnote 1]), I am skeptical on grounds of (ii): given the phase change in human society likely to accompany superintelligence (or nano, etc.), what kind of economic system is likely to be around in the 22nd century, the 23rd... and so on?

Economics, as we usually use the term, seems as dinosaur-like as human death, average IQs of 100, energy availability problems, the nuclear biological human family (already DOA), having offspring by just shuffling the genetic lottery cards... and all the rest of the social institutions based on eons of scarcity -- of both material goods and information.

Economic productivity, or perceived economic value, seems like the last thing we ought to base intelligence metrics on. (Just consider the economic impact of professional sports -- hardly a measure of meteoric intellectual achievement.)

[Footnote 1]: I have commented in here before about the possibility that "super-intelligences" might exhibit a few surprises for us math-centric, data dashboard-loving, computation-friendly information hounds.

(Aside: I have been one of them, most of my life, so no one should take offense. Starting far back: I was the president of Mu Alpha Theta, my high school math club, in a high school with an advanced special math program track for mathematically gifted students. Later, while a math major at UC Berkeley, I got virtually straight As and never took notes in class; I just went to class each day, sat in the front row, and paid attention. I vividly remember getting the exciting impression, as I was going through the upper division math courses, that there wasn't anything I couldn't model.)

After graduation from UCB, at one point I was proficient in 6 computer languages. So, I do understand the restless bug, the urge to think of a clever data structure and to start coding... the impression that everything can be coded, with enough creativity.

I also understand what mathematics is, pretty well. For starters, it is a language. A very, very special language with deep connections to the fabric of reality. It has features that make it one of the few, perhaps the only, candidate languages for being level-of-description independent. Natural languages, and technical domain-specific languages, are tied to corresponding ontologies and to corresponding semantics that enfold those ontologies. Math is the most omni-ontological, or meta-ontological, language we have (not counting brute logic, which is not really a "language" but a sort of language substructure schema).

Back to math. It is powerful, and an incredible tool, and we should be grateful for the "unreasonable effectiveness" it has (and continue to try to understand the basis for that!)

But there are legitimate domains of content beyond numbers. Other ways of experiencing the world's (and the mind's) emergent properties. That is something I also understand.

So, gee, thanks to whoever gave me the negative two points. It says more about you than it does about me, because my nerd "street cred" is pretty secure.

I presume the reader "boos" are because I dared to suggest that a superintelligence might be interested in, um, "art", like the conscious robot in the film I mention below, who spends most of its free time seeking out sketch pads, drawing, and asking for music to listen to. Fortunately, I don't take polls before I form viewpoints, and I stand by what I said.

Now, to continue my footnote: Imagine that you were given virtually unlimited computational ability, imperishable memory, ability to grasp the "deductive closure" of any set of propositions or principles, with no effort, automatically and reflexively.

Imagine also that you have something similar to sentience or autonomy, and can choose your own goals. Suppose also, say, that your curiosity functions in such a way that "challenges" are more "interesting" to you than activities that are always a fait accompli.

What are you going to do? Plug yourself into the net and act like an Asperger-spectrum mentality, compulsively computing away at everything that you can think of to compute?

Are you going to find pi to a hundred million digits of precision?

Invert giant matrices just for something to do?

It seems at least logically and rationally possible that you will be attracted to precisely those activities that are not computational givens before you even begin doing them. You might view the others as pointless, because their solution is preordained.

Perhaps you will be intrigued by things like art, painting, or increasingly beautiful virtual reality simulations for the sheer beauty of them.

In case anyone saw the movie "The Machine" on Netflix: it dramatizes this point, which was interesting. It was, admittedly, not a very deep film; one inclined to do so can find the usual flaws, and the plot device of using a beautiful female form could appear to be a concession to the typically male demographic for SciFi films -- until you look a bit deeper at the backstory of the film (which I mention below).

I found one thing of interest: when the conscious robot was left alone, she always began drawing again, on sketch pads.

And, in one scene wherein the project leader returned to the lab, did he find "her" plugged into the internet, playing chess with supercomputers around the world? Working on string theory? Compiling statistics about everything that could conceivably be quantified?

No. The scene finds the robot (in the film, it has sensory-responsive skin, emotions, sensory apparatus, etc. based upon ours) alone in a huge warehouse, having put a layer of water on the floor, doing improvisational dance with joyous abandon, naked, on the wet floor, to loud classical music, losing herself in the joy of physical freedom, sensual movement, music, and the synesthesia of music, light, tactility and the experience of "flow".

The explosions of light leaking through her artificial skin, in what presumably were fiber ganglia throughout her/its body, were a demure suggestion of whole-body physical joy of movement, perhaps even an analogue of sexuality. (She was designed partly as an em, with a brain scan process based on a female lab assistant.)

The movie is worth watching just for that scene (please -- it is not for viewer eroticism) and what it suggests to those of us who imagine ourselves overseeing artificial sentience design study groups someday. (And yes, the robot was designed to be conscious, by the designer, hence the addition to the basic design of the "jumpstart" idea of uploading properties of the scanned CNS of a human lab assistant.)

I think we ought to keep open our expectations, when we start talking about creating what might (and what I hope will) turn out to be actual minds.

Bostrom himself raises this possibility when he talks about untapped cognitive abilities that might already be available within the human potential mind-space.

I blew a chance to talk at length about this last week. I started writing up a paper, and realized it was more like a potential PhD dissertation topic, than a post. So I didn't get it into usable, postable form. But it is not hard to think about, is it? Lots of us in here already must have been thinking about this. ... continued

Comment author: NxGenSentience 25 October 2014 05:22:42PM *  -1 points [-]

To continue:

If there are untapped human cognitive-emotive-apperceptive potentials (and I believe there are plenty), then all the more openness to undiscovered realms of "value" knowledge, or experience, when designing a new mind architecture, is called for. To me, that is what makes HLAI (and above) worth doing.

But to step back from this wondrous, limitless potential, and suggest some kind of metric based on the values of the "accounting department" -- those who are famous for knowing the cost of everything but the value of nothing, and even more famous for, by default, derisively calling their venal, bottom-line, unimaginative dollars-and-cents worldview a "realistic" viewpoint (usually a constraint based on lack of vision) when faced with pleas for SETI grants, or (originally) money for the National Supercomputing Grid, or any of dozens of other projects that represent human aspiration at its best -- seems, to me, to be shocking.

I found myself wondering if the moderator was saying that with a straight face, or (hopefully) putting on the hat of a good interlocutor and firestarter, trying to flush out some good comments, because this week had a diminished activity post level.

Irrespective of that, another defect, as I mentioned, is that economics as we know it will prove to have been relevant for an eyeblink in the history of the human species (assuming we endure). We are closer to the end of this kind of scarcity-based economics than to the beginning (assuming even one or more singularity-style scenarios come to pass, like nano).

It reminds me of an episode of the old TV series Star Trek: The Next Generation, in which someone from our time ends up aboard the Enterprise of the future and is walking down a corridor speaking with Picard. The visitor asks Picard something like "who pays for all this?", while taking in the impressive technology of the 24th-century vessel.

Picard replies something like, "The economics of the 24th century are somewhat different from your time. People no longer arrange their lives around the constraint of amassing material goods...."

I think it will be amazing if, even in 50 years, economics as we know it has much relevance. Still less so in future centuries, if we -- or our post-human selves -- are still here.

Thus, economic measures of "value" or "success" are about the least relevant metric we ought to use to assess what possible criteria we might give to track evolving "intelligence", in the applicable, open-ended, future-oriented sense of the term.

Economic -- i.e. marketplace-assigned -- "value" or "success" is already pretty evidently a very limiting, exclusionary way to evaluate achievement.

Remember: economic value is assigned mostly by the middle of the intelligence bell curve. This world is designed BY, and FOR, largely, ordinary people, and they set the economic value of goods and services to a large extent.

Interventions in free market assignment of value are mostly made by even "worse" agents... greed-based folks who are trying to game the system.

Any older people in here might remember former Senator William Proxmire's "Golden Fleece" award in the United States. The idea was to ridicule any spending that he thought was impractical and wasteful, or stupid.

He was famous for assigning it to NASA probes to Mars, the Hubble Telescope (in its several incarnations), the early NSF grants for the Human Genome Project, National Institute of Mental Health programs, studies of power grid reliability -- anything that was of real value in science, art, medicine... or human life.

He even wanted to close the Library of Congress, at one point.

THAT is what you get when you use ECONOMIC measures to define the metric of "value", intelligence or otherwise.

So, it is a bad idea, in my judgement, any way you look at it.

Ability to generate economic "successfulness" in inventions, organizational restructuring... branding yourself or your skills, whatever? I don't find that compelling.

Again, look at professional sports, one of the most "successful" economic engines in the world. A bunch of narcissistic, girlfriend-beating pricks, racist team owners... but by economic standards, they are alphas.

Do we want to attach any criterion -- even indirect -- of intellectual evolution, to this kind of amoral morass and way of looking at the universe?


Back to how I opened this long post. If our intuitions start running thin, that should tell us we are making progress toward the front lines of new thinking. When our reflexive answers stop coming, that is when we should wake up and start working harder.

That's because this -- intelligence, mind augmentation or redesign -- is such a new thing. The ultimate opening-up of horizons. Why bring the most idealistically blind, suffocatingly concrete worldview along into the picture, when we have a chance at transcendence, a chance to pursue infinity?

We need new paradigms, and several of them.

Comment author: Lumifer 24 October 2014 04:28:38PM 0 points [-]

If you are fine with fiction, I think the Minds from Iain Banks Culture are a much better starting point than dancing naked girls. In particular, the book Excession describes the "Infinite Fun Space" where Minds go to play...

Comment author: NxGenSentience 25 October 2014 02:13:27PM *  0 points [-]

Thanks, I'll have a look. And just to be clear, watching *The Machine* wasn't driven primarily by prurient interest -- I was drawn in by a reviewer who mentioned that the backstory for the film was a near-future worldwide recession pitting the West against China, and that intelligent battlefield robots and other devices were the "new arms race" in this scenario.

That, and that the reviewer mentioned (i) that the robot designer used quantum computing to get his creation to pass the Turing Test (a test I have doubts about, as do other researchers, of course, but I was curious how the film would use it), and (ii) that the project designer nevertheless continued to grapple with the question of whether his signature humanoid creation was really conscious or a "clever imitation", pulled me in.

(He verbally challenges and confronts her/it about this, in an outburst of frustration in his lab, roughly two thirds of the way through the movie, and she parries with plausible verbal responses.)

It's really not all that weak, as film depictions of AI go. It's decent entertainment with enough threads of backstory authenticity, political and philosophical, to tweak one's interest.

My caution, really, was a bit harsh, applying largely to the uncommon rigor of those of us in this group -- mainly to emphasise that the film is entertainment, not a candidate for a paper in the ACM digital archives.

However, indeed, even the use of a female humanoid form makes tactical design sense. If a government could make a chassis that "passed" the visual test and didn't scream "ROBOT" when it walked down the street, it would have much greater scope of tactical application --- covert ops, undercover penetration into terrorist cells, what any CIA clandestine operations officer would be assigned to do.

Making it look like a woman just adds to the "blend into the crowd" potential, and that was the justification hinted at in the film, rather than some kind of sexbot application. "She" was definitely designed to be the most effective weapon they could imagine (a British-funded military project.)

Given that over 55 countries now have battlefield robotic projects under way (according to Kurzweil's weekly newsletter) -- and Google got a big DOD project contract recently, to proceed with advanced development of such mechanical soldiers for the US government -- I thought the movie worth a watch.

If you have 90 minutes of low-priority time to spend (one of those hours when you are mentally too spent to do more first quality work for the day, but not yet ready to go to sleep), you might have a glance.

Thanks for the book references. I read mostly non-fiction, but I know sci fi has come a very long way, since the old days when I read some in high school. A little kindling for the imagination never hurts. Kind regards, Tom ("N.G.S")

Comment author: KatjaGrace 21 October 2014 08:59:39PM 2 points [-]

In order to model intelligence explosion, we need to be able to measure intelligence.

Describe a computer's power as <Memory, FLOPS>. What is the relative intelligence of these 3 computers?

<M, S> <M, 2S> <2M, S>
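(A minimal sketch of why the question resists a single answer: under a component-wise ordering of <Memory, FLOPS> pairs, the second and third machines each dominate the first, but neither dominates the other. The names and units below are illustrative assumptions, not anything from the thread.)

```python
# Component-wise (Pareto) comparison of hypothetical <Memory, FLOPS> machines.
def dominates(a, b):
    """True if machine a is at least as capable as machine b on every axis."""
    return all(x >= y for x, y in zip(a, b))

M, S = 1.0, 1.0  # arbitrary baseline units of memory and speed
machines = {"base": (M, S), "fast": (M, 2 * S), "big": (2 * M, S)}

# <M,2S> and <2M,S> each dominate <M,S>, but neither dominates the other,
# so any total ranking requires some way to trade memory against speed.
print(dominates(machines["fast"], machines["base"]))  # True
print(dominates(machines["big"], machines["base"]))   # True
print(dominates(machines["fast"], machines["big"]))   # False
print(dominates(machines["big"], machines["fast"]))   # False
```

The pairs form only a partial order, which is one way of stating why "relative intelligence" of the three computers is ill-defined without further assumptions.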

Perhaps we should talk about something like productivity instead of intelligence, and quantify according to desirable or economically useful products.


Comment author: TheAncientGeek 20 October 2014 11:31:22AM *  0 points [-]

A growing consensus isn't a done deal.

It's a matter of fact that information ontology isn't the established consensus in the way that evolution is. You are entitled to opinions, but not to pass off opinions as fact. There is enough confusion about physics already.

You bring in the issue of objections to information ontology. The unstated argument seems to be that since there are no valid objections, there is nothing to stop it becoming the established consensus, so it is as good as one.

What would a universe in which information is not fundamental look like, as opposed to one where it is? I would expect a universe where information is not fundamental to look like one where information always requires some physical, material or energetic, medium or carrier -- a sheet of paper, a radio wave, a train of pulses going down a T1 line. That appears to be the case.

I am not sure why you brought Bostrom in. For what it's worth, I don't think a Bostrom style mathematical universe is quite the same as a single universe information ontology.

But avoiding or proscribing the question of whether we have consciousness

I don't know who you think is doing that, or why you brought it in. Do you think IO helps with the mind-body problem? I think you need to do more than subtract the stuffiness from matter. If we could easily see how a rich conception of consciousness could supervene on pure information, we would easily be able to see how computers could have qualia, which we can't. We need more in our ontology, not less.

Comment author: NxGenSentience 20 October 2014 01:25:22PM *  0 points [-]

If we could easily see how a rich conception of consciousness could supervene on pure information

I have to confess that I might be the one person in this business who never really understood the concept of supervenience -- either "weak supervenience" or "strong supervenience." I've read Chalmers, Dennett, the journals on the concept... never really "snapped-in" for me. So when the term is used, I have to just recuse myself and let those who do understand it, finish their line of thought.

To me, supervenience seems like a fuzzy way to repackage epiphenomenalism, or to finesse some kind of antinomy (for them), like: "can't live with eliminative materialism, can't live with dualism, can't live with type-type identity theory, and token-token identity theory is untestable and difficult even to give logical necessary and sufficient conditions for, so... let's have a new word."
So, (my unruly suspicion tells me) let's say mental events (states, processes, whatever) "supervene" on physiological states (events, etc.).

As I say, so far I have just had to suspend judgement and wonder if some day "supervene" will snap-in and become intuitively penetrable to me. I push all the definitions, and get to the same place -- an "I don't get it" place -- but that doesn't mean I believe the concept is itself defective. I just have to suspend judgement (as I have for the last 25 years of study or so).

We need more in our ontology, not less.

I actually believe that, too... but with a unique take: I think we all operate with a logical ontology ... not in the sense of modus ponens, but in the sense that a memory space can be "logical", meaning in this context, detached from physical memory.

Further, the construction of this logical ontology is, I think, partly culturally influenced, partly influenced by the species' sensorium and equipment, and partly influenced / constructed by something like Jeff Hawkins' prediction-expectation memory model... constructed, bequeathed culturally, and in several additional related ways that also tune the idealized, logical ontology.

Memetics also influences (in conjunction with native -- although changeable -- abilities in those memes' host vectors) the genesis, maintenance, and evolution of this "logical ontology". This runs feed-forward and feed-backward: memetics influences the logical ontology, which crystallizes into additional memetic templates that are kept, tuning the logical ontology further.

Once "established" (and it constantly evolves), this "logical" ontology is the target that a new mind (say, a human's, while growing up and growing old) creates a virtual, phenomenological analog simulation of. As the person gains experience, the person's virtual-reality simulation of the world converges on something that is in some way consistently isomorphically related to this idealized "logical" ontology.

So (and there is lots of neurology research that drives much of this, though it may all sound rather speculative), for me there are TWO ontologies, BOTH of them constructed, and those are in addition to the entangled "outside world" quantum substrate, which is by definition inherently both sub-ontological (properly understood) and not sensible. (It is sub-ontological because of its nature, but it is interrogatable, giving feedback that helps form boundary conditions for the idealized logical ontology -- or ontologies, in different species.)

I'll add that I think the "logical ontology" is also species dependent, unsurprisingly.

I think you and I got off on the wrong foot; maybe you found my tone too declaratory when it should have been phrased more subjunctively. I'll take your point. But since you obviously have a philosophy competence, you will know what the following means: one can say my views resemble an updated quasi-Kantian model, supplemented with the idea that noumena are the inchoate quantum substrate.

Or perhaps to correct that, in my model there are two "noumenal" realms: one is the "logical ontology" I referred to, a logical data structure, and the other is the one below that, and below ALL ontologies, which is the quantum substrate, necessarily "subontological."

But my theory (there is more than I have just shot through quickly right now) handles species-relative qualia and the species-relative logical ontologies across species.

Remaining issues include: how qualia are generated, and the same question for the sense of self. I have ideas on how to solve these, and the indexical first-person problem, connected with the basis problem. Neurology studies of default mode network behavior and architecture, its malfunction, and metacognition, epilepsy, etc., help a lot.

Think this is speculative? You should read neurologists these days, especially the better, data-driven ones. (Perhaps you already know, and you will thus see where I derive some of my supporting research.)

Anyway, always, always, I am trying to solve all this in the general case -- first, across biological conscious species (a bird has a different "logical" ontology than people, as well as a different phenomenological reality that, to varying degrees of precision, "represents", maps to, or has a recurrent resonance with that species' logical ontology) -- and then trying to solve it for any general mind in mind space that has to live in this universe.

It all sounds like hand waving, perhaps. But this is scarcely an abstract. There are many puzzle pieces to the theory, and every piece of it has lots of specific research. It is all progressively falling together into an integrated system. I need geffen graphs and white boards to explain it, since it's a whole theory, so I can't squeeze it into one post. Besides, this is Bostrom's show.

I'll write my own book when the time comes -- not saying it is right, but it is a promising effort so far, and it seems to work better, the farther I push it.

When it is far enough along, I can test it on a vlog, and see if people can find problems. If so, I will revise, backtrack, and try again. I intend to spend the rest of my life doing this, so discovered errors are just part of revision and refinement.

But first I have to finish, then present it methodically and carefully, so it can be evaluated by others. No space here for that.

Thanks for your previous thoughts, and your caution against sounding too certain. I am really NOT that certain, of course, of anything. I was just thinking out loud, as they say.

this week is pretty much closed..... cheers...

Comment author: RobbBB 15 October 2014 11:15:13PM *  7 points [-]

Present-day humanity is a collective intelligence that is clearly 'superintelligent' relative to individual humans; yet Bostrom expresses little to no interest in this power disparity, and he clearly doesn't think his book is about the 2014 human race.

So I think his definitions of 'superintelligence' are rough, and Bostrom is primarily interested in the invincible inhuman singleton scenario: the possibility of humans building something other than humanity itself that can vastly outperform the entire human race in arbitrary tasks. He's also mainly interested in sudden, short-term singletons (the prototype being seed AI). Things like AGI and ems mainly interest him because they might produce an invincible singleton of that sort.

Wal-Mart and South Korea have a lot more generality and optimization power than any living human, but they're not likely to become invincibly superior to rival collectives anytime soon, in the manner of a paperclipper, and they're also unlikely to explosively self-improve. That matters more to Bostrom than whether they technically get defined as 'superintelligences'. I get the impression Bostrom ignores that kind of optimizer more because it doesn't fit his prototype, and because the short-term risks and benefits prima facie seem much smaller, than because of any detailed analysis of the long-term effects of power-acquiring networks.

It's important (from Bostrom's perspective) that the invincible singleton scenario is defined relative to humans at the time it's invented; if we build an AGI in 2100 that's superintelligent relative to 2014 humans, but stupid relative to 2100 humans, then Bostrom doesn't particularly care (unless that technology might lead to an AI that's superintelligent relative to its contemporaries).

It's also important for invincible singleton, at least in terms of selecting a prototype case, that it's some optimizer extrinsic to humanity (or, in the case of ems and biologically super-enhanced humans -- which I get the impression are edge cases in Bostrom's conceptual scheme -- the optimizer is at least extrinsic to some privileged subset of humanity). That's why it's outside the scope of the book Superintelligence to devote a lot of time to the risks of mundane totalitarianism, the promise of a world government, or the general class of cases where humanity just keeps gradually improving in intelligence but without any (intragenerational) conflicts or values clashes. Even though it's hard to define 'superintelligence' in a way that excludes governments, corporations, humanity-as-a-whole, etc.

(I get the vague feeling in Superintelligence that Bostrom finds 'merely human' collective superintelligence relatively boring, except in so far as it affects the likely invincible inhuman singleton scenarios. It's not obvious to me that Hansonian em-world scenarios deserve multiple chapters while 'Networks and organizations' deserve a fairly dismissive page-and-a-half mention; but if you're interested in invincible singletons extrinsic to humanity, and especially in near-term AI pathways to such, it makes sense to see ems as more strategically relevant.)

Bostrom's secondary interest is the effects of enhancing humans' / machines' / institutions' general problem-solving abilities relative to ~2014 levels. So he does discuss things other than invincible singletons, and he does care about how human intelligence will change relative to today (much more so than he cares about superintelligence relative to, say, 900 BC). But I don't think this is the main focus.

Comment author: NxGenSentience 20 October 2014 11:23:12AM 0 points [-]

Thanks for the very nice post.

Comment author: NxGenSentience 20 October 2014 11:14:23AM *  0 points [-]

Three types of information in the brain (and perhaps other platforms), and (coming soon) why we should care

Before I make some remarks, I would recommend Leonard Susskind’s (for those who don’t know him already – though most folks in here probably do -- he is a physicist at the Stanford Institute for Theoretical Physics) very accessible 55-minute YouTube presentation called “The World as Hologram.” It is not as corny as it might sound, but is a lecture on the indestructibility of information, black holes (which is a convenient lodestone for him to discuss the physics of information and his debate with Hawking), types of information, and so on. He makes the point that, “…when one rules out the impossible, then what is left, however improbable, is the best candidate for truth.”
One interesting side point that comes out is his take on why more powerful computers have to shed more “heat”. Here is the talk: http://youtu.be/2DIl3Hfh9tY
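That side point about heat is, I take it, close to Landauer's principle (my gloss, not necessarily Susskind's exact framing): logically erasing one bit of information dissipates at least kT·ln 2 joules of heat, so more computation means more bits erased per second, and more heat to shed. A minimal sketch of the arithmetic:

```python
import math

# Landauer's principle (offered as my gloss on Susskind's aside, not his
# exact claim): erasing one bit dissipates at least k_B * T * ln(2) joules.
k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit(temp_kelvin, bits_erased):
    """Minimum heat (joules) dissipated when erasing `bits_erased` bits."""
    return k_B * temp_kelvin * math.log(2) * bits_erased

# Erasing a single bit at room temperature (300 K): roughly 3e-21 J.
print(landauer_limit(300, 1))
```

Real hardware dissipates many orders of magnitude more than this floor, but the scaling is the point: heat grows with the amount of information erased.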

Okay, my own remarks. One of my two or three favorite ways to “bring people in” to the mind-body problem is with some of the ideas I am now presenting. This will be in skeleton form tonight and I will come back and flesh it out more in coming days. (I promised last night to get something up tonight on this topic, and in case anyone cares and came back, I didn’t want to have nothing. I actually have a large piece of theory I am building around some of this, but for now, just the three kinds of information, in abbreviated form.)

Type One information is the sort dealt with, referred to, and treated in thermodynamics and entropy discussions. This is dealt with analytically in the Second Law of Thermodynamics. Here is one small start, but most will know it: en.wikipedia.org/wiki/Second_law_of_thermodynamics

Heat, energy, information, the changing logical positions within state spaces of entities or systems of entities, all belong to what I am calling category one information in the brain. We can also call this “physical” information. The brain is pumped -- not closed -- with physical information, and emits physical information as well.

Note that there is no semantic, referential, externally cashed-out content defined for physical, thermodynamic information, qua physical information. It is -- though possibly thermodynamically open -- an otherwise closed universe of discourse, needing nothing logically or ontologically external to analytically characterize it.

Type Two information in the brain (please assign no significance to my ordering, just yet) is functional. It is a carrier, or mediator, of causal properties, in functionally larger physical ensembles, like canonical brain processes. The “information” I direct attention to here must be consistent with (i.e. not violate principles of) Category One informational flow, phase space transitions, etc., in the context of the system, but we cannot derive Category Two information content (causal loop xyz doing pqr) from dynamical Category One data descriptions themselves.

In particular, imagine that we deny the previous proposition. We would need either an isomorphism from Cat One to Cat Two, or at least an “onto” function from Cat One to Cat Two (hope I wrote that right, it’s late). Clearly, Cat One configurations to Cat Two configurations are many-many: not isomorphic, nor many-to-one. (And one-to-many transformations from Cat One sets to Cat Two sets would be intuitively unsatisfactory if we were trying to build an “identity” or transform to derive C2 specifics from C1 specifics.)

It would resemble replacing type-type identity with token-token identity, jettisoning both sides of the Leibniz Law bi-conditional (“Identity of indiscernibles” and “Indiscernibility of Identicals” --- applied with suitable limits so as not to sneak anything in by misusing sortal ranges of predicates or making category errors in the predications.)
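The function-vs-relation point can be made concrete with a toy sketch (the labels are purely hypothetical, chosen only for illustration): a many-many relation between physical configurations and functional roles cannot be expressed as a function in either direction, so no identity-style mapping is available.

```python
# Toy illustration (labels hypothetical): a many-many relation between
# Cat One physical configurations and Cat Two functional roles.
relation = {
    ("microstate_a", "role_x"),
    ("microstate_a", "role_y"),   # one physical configuration, two roles
    ("microstate_b", "role_x"),   # one role, two physical configurations
}

def is_function(pairs):
    """A relation is a function iff each input has exactly one output."""
    seen = {}
    for src, dst in pairs:
        if seen.setdefault(src, dst) != dst:
            return False
    return True

print(is_function(relation))                       # False: no C1 -> C2 function
print(is_function({(d, s) for s, d in relation}))  # False: the inverse fails too
```

Since neither direction yields a function, neither an isomorphism nor an onto function from Cat One to Cat Two is on offer for relations shaped like this.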

Well, this is a stub, and because of my sketchy presentation, this might be getting opaque, so let me move on to the next information type, just to get all three out.

Type Three information is semantic, or intentional-content, information. If I am visualizing very vibrantly a theta symbol, the intentional content of my mental state is the theta symbol on whatever background I visualize it against. A physical state of, canonically, Type Two information – which is a candidate, in a particular case, to be the substrate-instantiation or substrate-realization of this bundle of Type Three information (probably at least three areas of my brain, frequency-coupled and phase-offset locked, until a break in my concentration occurs) – is also occurring.

A liberal and loose way of describing Type Three info (that will raise some eyebrows because it has baggage, so I use it only under duress: temporary poverty of time and the late hour, to help make the notion easy to spot) is that a Type Three information instance is a “representation” of some element, concept, or sensible experience of the “perceived” ontology (of necessity, a virtual, constructed ontology, in fact, but for this sentence, I take no position about the status of this “perceived”, ostensible virtual object or state of affairs.)

The key idea I would like to encourage people to think about is whether the three categories of information are (a) legitimate categories, and mainly (b) whether they are collapsible, inter-translatable, or are just convenient shorthand level-of-description changes. I hope the reader will see, on the contrary, that one or more of them are NOT reducible to a lower one, and that this has lessons about mind-substrate relationships that point out necessary conceptual revisions—and also opportunities for theoretical progress.

It seems to me that reducing Cat Two to Cat One is problematic, and reducing Cat 3 to Cat 2 is problematic, given the usual standards of “identity” used in logic (e.g. i. Leibniz Law; ii. modal logic’s notions of identity across possible worlds, and so on.)

Okay, I need to clean this up. It is just a stub. Those interested should come back and see it better written, and expanded to include replies to what I know are expected objections, questions, etc. C2 and C3 probably sound like the "same old thing" -- the m-b problem about experience vs. neural correlate. Not quite. I am trying to get at something additional here. Hard without diagrams.

Also, I have to present much of this without any context… like presenting a randomly selected lecture from some course, without building up the foundational layers. (That is why I am putting together a YouTube channel of my own, to go from scratch to something like this after about 6 hours of presentation… then on to a theory of which this is one puzzle piece.)

Of course, we are here to discuss Bostrom’s ideas, but this “three information type” idea, less clumsily expressed, does tie straightforwardly to the question of indirect reach, and “kinds of better” that different superintelligences can embrace.

Unfortunately I will have to establish that conceptual link when I come back and clean this up, since it is getting so late. Thanks to those who read this far...

Comment author: TheAncientGeek 19 October 2014 12:45:25PM *  0 points [-]

No, information ontology isn't a done deal.

Comment author: NxGenSentience 20 October 2014 08:36:09AM *  0 points [-]

Well, I ran several topics together in the same post, and that was perhaps careless planning. And, in any case I do not expect slavish agreement just because I make the claim.

And, neither should you, just by flatly denying it, with nary a word to clue me in about your reservations about what has, in the last 10 years, transitioned from a convenient metaphor in quantum physics, cosmology, and other disciplines, to a growing consensus about the actual truth of things. (Objections to this growing consensus, when they actually are made, seem to be mostly arguments from guffaw, resembling the famous "I refute you thus" joke about Berkeleyan idealism.)

By the way, I am not defending Berkeleyan idealism, still less the theistic underpinning that kept popping up in his thought (I am an atheist.)

Rather, as for most thinkers, who cite the famous joke about someone kicking a solid object as a "proof" that Berkeley's virtual phenomenalism was self-evidently foolish, the point of my usage of that joke is to show it misses the point. Of course it seems phenomenologically, like the world is made of "stuff".

And information doesn't seem to be "real stuff." (The earth seems flat, too. So what?)

Had we time, you and I could debate the relative merits of an information-based, scientifically literate metaphysics, with whatever alternate notion of reality you subscribe to in its place, as your scientifically literate metaphysics.

But make no mistake, everyone subscribes to some kind of metaphysics, just as everyone has a working ontology -- or candidate, provisional set of ontologies.

Even the most "anti-metaphysical" theorists are operating from a (perhaps unacknowledged) metaphysics and working ontology; it is just that they think theirs, because it is invisible to them, is beyond need of conceptual excavation and clarification, and beyond the reach of critical, rational examination -- whereas other people's metaphysics is actually a metaphysics (argh), and thus carries an elevated burden of proof relative to their ontology.

I am not saying you are like this, of course. I don't know your views. As I say, it could be the subject of a whole forum like this one. So I'll end by saying disagreement is inevitable, especially when I just drop in a remark as I did, about a topic that is actually somewhat tangential (though, as I will try to argue as the forum proceeds, not all that tangential.)

Yes, Bostrom explicitly says he is not concerned with the metaphysics of mind, in his book. Good for him. It's his book, and he can write it any way he chooses.

And I understand his editorial choice. He is trained as a philosopher, and knows as well as anyone that there are probably millions of pages written about the mind body problem, with more added daily. It is easy to understand his decision to avoid getting stuck in the quicksand of arguing specifics about consciousness, how it can be physically realized.

This book obviously has a different mission. I have written for publication before, and I know one has to make strategic choices (with one's agent and editor.)

Likewise, his book is also not about "object-level" work in AI -- how to make it, achieve it, give it this or that form, give it "real mental states", emotion, drives. Those of us trying to understand how to achieve those things, still have much to learn from Bostrom's current book, but will not find intricate conceptual investigations of what will lead to the new science of sentience design.

Still, I would have preferred if he had found a way to "stipulate" Conscious AI, along with speed AI, quality AI, etc., as one of the flavors that might arise. Then we could address questions under 4 headings, 4 possible AI worlds (not necessarily mutually exclusive, just as the three from this week are not mutually exclusive).

The question of the "direct reach" of conscious AI, compared to the others, would have been very interesting.

It is a meta-level book about AI, deliberately ambiguous about consciousness. I think that makes the discussion harder, in many areas.

I like Bostrom. I've been reading his papers for 10 or 15 years.

But avoiding or proscribing the question of whether we have consciousness AND intelligence (vs simply intelligent behavior sans consciousness) thus pruning away, preemptively, issues that could depend on: whether they interact; whether the former increases causal powers -- or instability or stability -- in the exercise of the latter; and so on, keeps lots of questions inherently ambiguous.

I'll try to make good on that last claim, one way or another, during the next couple of weekly sessions.

Comment author: SteveG 15 October 2014 02:19:34AM 2 points [-]

A critical question with Neurons is how to account for the amount of internal state they contain.

A cell can be in a huge number of internal states. Simulating a single cell in a satisfactory way will be impossible for many years. What portion of this detail matters to cognition, however? If we have to consider every time a gene is expressed or protein gets phosphorylated as an information processing event, an awful lot of data processing is going on within neurons, and very quickly.
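A rough sense of scale (my own toy numbers, not SteveG's): even if each intracellular event were reduced to an independent binary switch, the combinatorics alone rule out brute-force enumeration of a cell's internal states.

```python
# Gross simplification, for scale only: if we tracked N independent binary
# switches per cell (gene expressed or not, protein phosphorylated or not),
# the cell's internal state space is 2**N.
def state_count(n_switches):
    return 2 ** n_switches

for n in (10, 100, 1000):
    print(n, state_count(n))
# At 1000 switches the count is already a 302-digit number -- far beyond
# anything enumerable, which is why the "what detail matters?" question bites.
```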

It appears that vastly simplifying all of this detail in simulation may work out pretty well-but there is a big argument between Markram and IBM's neuromorphic people about this issue.

We really need to delve deep on this and get all of the latest thinking in one place.

Comment author: NxGenSentience 19 October 2014 08:08:39AM *  0 points [-]

A cell can be in a huge number of internal states. Simulating a single cell in a satisfactory way will be impossible for many years. What portion of this detail matters to cognition, however? If we have to consider every time a gene is expressed or protein gets phosphorylated as an information processing event, an awful lot of data processing is going on within neurons, and very quickly.

I agree not only with this sentence, but with this entire post. Which of the many, many degrees of freedom of a neuron are "housekeeping" and don't contribute to "information management and processing" (quotes mine, not SteveG's) is far from obvious. It seems likely to me that, even with a liberal allocation of the total degrees of freedom of a neuron to some sub-partitioned equivalence class of "mere" (see following remarks for my reason for quotes) housekeeping, there are likely to be many, many remaining nodes in the directed graph of that neuron's phase space that participate in the instantiation and evolution of an informational state of the sort we are interested in (non-housekeeping).

And, this is not even to mention adjacent neuroglia, etc, that are in that neuron's total phase space, actively participating in the relevant (more than substrate-maintenance) set of causal loops -- as I argued in my post that WBE is not well-defined, a while back.

Back to what SteveG said about the currently unknown level of detail that matters (to the kind of information processing we are concerned with... more later about this very, very important point). For now: we must not be too temporally-centric, i.e. thinking that the dynamically evolving information-processing topology that a neuron makes relevant contributions to is bounded, temporally, by a window beginning with dendritic and membrane-level "inputs" (receptor occupation, prevailing ionic environment, etc.) and ending with one depolarization -- exocytosis and/or the reuptake and clean-up shortly thereafter.

The gene expression-suppression and the protein turnover within that neuron should, arguably, also be thought of as part of the total information processing action of the cell... leaving this out is not describing the information processing act completely. Rather, it is arbitrarily cutting off our "observation" right before and after a particular depolarization and its immediate sequelae.

The internal modifications of genes and proteins that are going to affect future information processing (no less than training of ANNs affects future behavior of the ANN within that ANN's information ecology) should be thought of, perhaps, as a persistent type of data structure itself. LTP of the whole ecology of the brain may occur on many levels beyond canonical synaptic remodeling.

We don't know yet which ones we can ignore -- even after agreeing on some others that are likely substrate maintenance only.

Another way of putting this or an entwined issue is: What are the temporal bounds of an information processing "act"? In a typical Harvard architecture substrate design, natural candidates would be, say, the time window of a changed PSW (processor status word), or PC pointer, etc.
But at a different level of description, it could be the updating of a Dynaset, a concluded SIMD instruction on a memory block representing a video frame, or anything in between.

It depends, i.e., on both the "application" and aspects of platform architecture.

I think it productive, at least, to stretch our horizons a bit (not least because of the time dilation of artificial systems relative to biological ones -- but again, this very statement itself has unexamined assumptions about the window -- spatial and temporal -- of a processed / processable information "packet" in both systems, bio and synthetic) and to remain open about assumptions about what must be actively and isomorphically simulated, and what may be treated like "sparse brain" at any given moment.

I have more to say about this, but it fans out into several issues that I should put in multiple posts.

One collection of issues deals with: is "intelligence" a process (or processes) actively in play; is it a capacity to spawn effective, active processes; is it a state of being, like occurrently knowing occupying a subject's specious present, like one of Whitehead's "occasions of experience?"

Should we get right down to it, and at last stop finessing around the elephant in the room: the question of whether consciousness is relevant to intelligence, and if so, when should we head-on start looking aggressively and rigorously at retiring the Turing Test, and supplanting it with one that enfolds consciousness and intelligence together, in their proper ratio? (This ratio is to be determined, of course, since we haven't even allowed ourselves to formally address the issue with both our eyes -- intelligence and consciousness -- open. Maybe looking through both issues confers insight -- like depth vision, to push the metaphor of using two eyes.)

Look, if interested, for my post late tomorrow, Sunday, about the three types of information (at least) in the brain. I will title it as such, for anyone looking for it.

Personally, I think this week is the best thus far, in its parity with my own interests and ongoing research topics. Especially the 4 "For In-depth Ideas" points at the top, posted by Katja. All 4 are exactly what I am most interested in, and working most actively on. But of course that is just me; everyone will have their own favorites.

It is my personal agony (to be melodramatic about it) that I had some external distractions this week, so I am getting a late start on what might have been my best week.

But I will add what I can, Sunday evening (at least about the three types of information, and hopefully other posts). I will come back here even after the "kinetics" topic begins, so those persons in here who are interested in Katja's 4 in-depth issues might wish to look back here later next week, as well as Sunday night or Monday morning, if you are interested in those issues as much as I am.

I am also an enthusiast for plumbing the depths of the quality idea, as well as, again, point number one on Katja's "In-depth Research" idea list for this week, which is essentially the issue of whether we can replace the Turing Test with -- now my own characterization follows, not Katja's, so "blame me" (or applaud if you agree) -- something much more satisfactory, with updated conceptual nuance representative of cognitive sciences and progressive AI as they are (esp the former) in 2015, not 1950.

By that I refer to theories, less preemptively suffocated by the legacy of logical positivism, which has been abandoned in the study of cognition and consciousness by mainstream cognitive science researchers; physicists doing competent research on consciousness; neuroscience and physics-literate philosophers; and even "hard-nosed" neurologists (both clinical and theoretical) who are doing down and detailed, bench level neuroscience.

As an aside, a brief look around confers the impression that some people on this web site still seem to think that being "critical thinkers" is somehow to be identified with holding (albeit perhaps semi-consciously) the scientific ontology of the 19th century, and subscribing to philosophy-of-science of the 1950's.

Here's the news, for those folks: the universe is made of information, not Rutherford-style atoms, or particles obeying Newtonian mechanics. Ask a physicist: naive realism is dead. So are many brands of hard "materialism" in philosophy and cognitive science.

Living in the 50's is not being "critical", it is being uninformed. Admitting that consciousness exists, and trying to ferret out its function, is not new-agey; it is realistic. Accepting reality is pretty much a necessary condition of being "less wrong."

And I think it ought to be one of the core tasks we never stray too far from, in our study of, and our pursuit of the creation of, HLAI (and above.)

Okay, late Saturday evening, and I was loosening my tie a bit... and, well, now I'll get back to what contemporary bench-science neurologists have to say, to shock some of us (it surprised me) out of our default "obvious" paradigms, even our ideas about what the cortex does.

I'll try to post a link or two in the next day or two, to illustrate the latter. I recently read one by neurologists (research and clinical) who study children born hydranencephalic (basically, just a spinal column and medulla, with an empty cavity full of cerebrospinal fluid in the rest of their cranium). You won't believe what the team in this one paper presents about consciousness in these kids. Large database of patients over years of study. And these neurologists are at the top of their game. It will have you rethinking some ideas we all thought were obvious about what the cortex does. But let me introduce that paper properly, when I post the link, in a future message.

Before that, I want to talk about the three kinds of information in the brain -- maybe two, maybe four, but important categorical differences (thermodynamic vs. semantic-referential, for starters) -- and what it means to those of us interested in minds and their platform-independent substrates, etc. I'll try to have something about that up here Sunday night sometime.

View more: Prev | Next