In response to Meta-rationality
Comment author: Tuukka_Virtaperko 06 November 2013 02:33:23AM 2 points [-]

This whole conversation seems a little awkward now.

Comment author: Tuukka_Virtaperko 11 November 2013 07:18:44PM 1 point [-]

I apologize. I no longer feel a need to behave in the way I did.

Comment author: Risto_Saarelma 15 February 2013 07:16:46PM 1 point [-]

I didn't intend it as much of an ad hominem; after all, both groups in the comparison are so far quite unprepared for the undertaking they're attempting. I'm just trying to find ways to describe the cultural mismatch that seems to be going on here.

I understand that math is starting to have some stuff dealing with how to make good maps from a territory. Only that's inside the difficult and technical stuff like Jaynes' Probability Theory or Pearl's Causality, instead of somebody just making a nice new logical calculus with an operator for doing induction. There are already some philosophically interesting results, like an inductive learner needing to have innate biases to be able to learn anything.

Comment author: Tuukka_Virtaperko 22 February 2013 09:33:03AM *  0 points [-]

That's a good result. However, the necessity of innate biases undermines the notion of rationality, unless we have a system for differentiating the rational cognitive faculty from the innately biased cognitive faculty. I am proposing that this differentiation faculty be rational, hence "Metarationality".

In the Cartesian coordinate system I devised, object-level entities are projected as vectors. Vectors with a positive Y coordinate are rational. The only defined operation so far is addition: vectors can be added to each other. In this metasystem we are able to combine object-level entities (events, objects, "things") by adding them to each other as vectors. This system can be used to examine individual object-level entities within the context other entities create by virtue of their existence. Because the coordinate system assigns a moral value to each entity it can express, it can be used for decision making. Obviously, it values morally good decisions over morally bad ones.

Every entity in my system is an ordered pair of the form . Here x and y are propositional variables whose truth values can be -1 (false) or 1 (true). x denotes whether the entity is tangible and y whether it is placed within a rational epistemology. p is the entity. &p is the conceptual part of the entity (a philosopher would call that an "intension"). *p is the sensory part of the entity, i.e. what sensory input is considered to be the referent of the entity's conceptual part. A philosopher would call *p an extension. a, b and c are numerical values, which denote the value of the entity itself, of its intension, and of its extension, respectively.

The right side of the following formula (to the right of the equivalence operator) tells how b and c are used to calculate a. The left side of the formula tells how any entity is converted to the vector a. The vector conversion allows both innate cognitive bias and object-level rationality to influence decision making within the same metasystem.
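
As a rough sketch of what this could look like in code (the combining rule a = b + c and the projection (x·a, y·a) below are placeholder assumptions of my own, since the prose above does not pin them down; only the general shape - entities carrying x, y, b, c and being combined by vector addition - follows the description):

```python
from dataclasses import dataclass

@dataclass
class Entity:
    """An object-level entity as described above.

    x: +1 if the entity is tangible, -1 if not.
    y: +1 if it is placed within a rational epistemology, -1 if not.
    b: value of the entity's conceptual part (intension, &p).
    c: value of the entity's sensory part (extension, *p).
    """
    x: int    # -1 or +1
    y: int    # -1 or +1
    b: float
    c: float

    @property
    def a(self) -> float:
        # Placeholder: the real rule combining b and c is not reproduced here,
        # so this simply sums them (my assumption).
        return self.b + self.c

    def as_vector(self) -> tuple[float, float]:
        # Placeholder projection: scale the (x, y) signs by the entity's value a.
        return (self.x * self.a, self.y * self.a)

def add(v: tuple[float, float], w: tuple[float, float]) -> tuple[float, float]:
    """Vector addition, the only operation defined so far."""
    return (v[0] + w[0], v[1] + w[1])

# Example: combine a tangible, rationally placed entity with an intangible one.
e1 = Entity(x=1, y=1, b=0.7, c=0.3)
e2 = Entity(x=-1, y=1, b=0.2, c=0.1)
combined = add(e1.as_vector(), e2.as_vector())
print(combined)            # (0.7, 1.3)
print(combined[1] > 0)     # positive Y coordinate -> counted as rational
```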

If someone says that it's just a hypothesis that this model works, I agree! But I'm eager to test it. However, this would require some teamwork.

In response to Meta-rationality
Comment author: Risto_Saarelma 26 October 2012 06:53:14AM 3 points [-]

A bit late to this, but I think I figured out what the basic problem here is: Robert Pirsig is an archer, while LW (and folk like Judea Pearl, Gary Drescher and Marcus Hutter) are building hot-air balloons. And we're talking about doing a Moon shot, building an artificial general intelligence, here.

Archers think that if they get their bowyery really good and train to shoot really, really well, they might eventually land an arrow on the Moon. Maybe they'll need to build some kind of ballista type thing that needs five people to draw, but archery is awesome at skewering all sorts of things, so it should definitely be the way to go.

Hot-air balloonists on the other hand are pretty sure bows and arrows aren't the way to go, despite balloons being a pretty recent invention while archery has been practiced for millennia and has a very distinguished pedigree of masters. Balloons seem to get you higher up than you can get things to go with any sort of throwing device, even one of those fancy newfangled trebuchet things. Sure, nobody has managed to land a balloon on the Moon either, despite decades of trying, so obviously we're still missing something important that nobody really has a good idea about.

But it does look like figuring out how stuff like balloons work and trying to think of something new along similar lines, instead of developing a really good archery style is the way to go if you want to actually land something on the Moon at some point.

Comment author: Tuukka_Virtaperko 15 February 2013 09:33:58AM -1 points [-]

In any case, this "hot-air balloonist vs. archer" (POP!) comparison seems like some sort of an argument ad hominem -type fallacy, and that's why I reacted with an ad hominem attack about legos and stuff. First of all, ad hominem is a fallacy, and does nothing to undermine my case. It does undermine the notion that you are being rational.

Secondly, if my person is that interesting, I'd say I resemble the mathematician C. S. Peirce more than Ramakrishna. It seems to me mathematics is not necessarily considered completely acceptable by the notion of rationality you are advocating, as pure mathematics is only concerned with rules regarding what you'd call "maps" but not rules regarding what you'd call "territory". That's a weird problem, though.

Comment author: Risto_Saarelma 09 January 2013 03:36:05AM 0 points [-]

Would you find a space rocket to resemble either a balloon or an arrow, but not both?

The point is that the people who build it will resemble balloon-builders, not archers. People who are obsessed with getting machines to do things, not people who are obsessed with human performance.

Comment author: Tuukka_Virtaperko 29 January 2013 05:10:53PM -1 points [-]

My work is a type theory for an AI to conceptualize the input it receives via its artificial senses. If it weren't, I would never have come here.

The conceptualization faculty is accompanied by a formula for making moral evaluations, which is the basis of advanced decision making. Whatever the AI can conceptualize, it can also project as a vector on a Cartesian plane. The direction and magnitude of that vector are the data used in this decision making.

The actual decision making algorithm may begin by making random decisions and filtering good decisions from bad with the mathematical model I developed. Based on this filtering, the AI would begin to develop a self-modifying heuristic algorithm for making good decisions and, in general, for behaving in a good manner. What the AI would perceive as good behavior would of course, to some extent, depend on the environment in which the AI is placed.
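
A minimal sketch of that learning loop, under loud assumptions: the moral evaluation is stood in for by an arbitrary placeholder scoring function, and the "self-modifying heuristic" is just a table of action preferences updated from the scores - none of these specifics come from the model itself.

```python
import random
from collections import defaultdict

def moral_value(state, action) -> float:
    """Stand-in for the vector-based moral evaluation described above.
    Arbitrary placeholder: prefer actions that match the state's parity."""
    return 1.0 if action == state % 2 else -1.0

class Agent:
    def __init__(self, actions):
        self.actions = actions
        # Heuristic preferences, learned from the filtered experience.
        self.preference = defaultdict(float)

    def act(self, state, explore=0.2):
        # Mostly follow learned preferences, sometimes act at random.
        if random.random() < explore or not self.preference:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.preference[(state, a)])

    def learn(self, state, action, value):
        # Reinforce decisions the evaluation judged good, discount bad ones.
        self.preference[(state, action)] += value

agent = Agent(actions=[0, 1])
for step in range(1000):
    state = random.randint(0, 9)
    action = agent.act(state)
    agent.learn(state, action, moral_value(state, action))
```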

If you had an AI making random actions and changing its behavior according to heuristic rules, it could learn things in a similar way to how a baby learns. If you're not interested in that, I don't know what you're interested in.

I didn't come here to talk about some philosophy. I know you're not interested in that. I've done the math, but not the algorithm, because I'm not much of a coder. If you don't want to code a program that implements my mathematical model, that's no reason to give me -54 karma.

Comment author: Tuukka_Virtaperko 08 January 2013 12:48:22PM *  0 points [-]

Would you find a space rocket to resemble either a balloon or an arrow, but not both?

I didn't imply something Pirsig wrote would, in and of itself, have much to do with artificial intelligence.

LessWrong is like a sieve that only collects stuff that looks like I need it, but on a closer look I don't. You won't come until the table is already set. Fine.

In response to Meta-rationality
Comment author: Tuukka_Virtaperko 11 October 2012 08:17:07AM *  -2 points [-]

I can't reply to some of the comments, because they are below the threshold. Replies to downvoted comments are apparently "discouraged" but not banned, and I'm not on LW for any other reason than this, so let's give it a shot. I don't suppose I am simply required to not reply to a critical post about my own work.

First of all, thanks for the replies, and I no longer feel bad about the roughly -35 "karma" points I received. I could have tried to write some sort of a general introduction for you, but I've attempted to write such introductions earlier, and I've found dialogue to be a better way. The book I wrote is a general introduction, but it's 140 pages long. Furthermore, my publisher wouldn't want me to give it away for free, and the style isn't a very good fit for LessWrong. I'd perhaps have to write another book and publish it for free as a series of LessWrong articles.

Mitchell_Porter said:

Tuukka's system looks like a case study in how a handful of potentially valid insights can be buried under a structure made of wordplay (multiple uses of "irrational"); networks of concepts in which formal structures are artificially repeated but the actual relations between concepts are fatally vague (his big flowchart); and a severe misuse of mathematical objects and propositions in an attempt to be rigorous.

The contents of the normative and objective continua are relatively easily processed by an average LW user. The objective continuum consists of dialectic (classical quality) about sensory input. Sensory input is categorized as it is categorized in Maslow's hierarchy of needs. I know there is some criticism of Maslow's theory, but can we accept it as a starting point? "Lower needs" includes homeostasis, eating, sex, excretion and such. "Higher needs" includes reputation, respect, intimacy and such. "Deliberation" includes Maslow's "self-actualization", that is, problem solving, creativity, learning and such. Sense-data is not included in Maslow's theory, but it could be assumed that humans have a need to have sensory experiences, and that this need is so easy to satisfy that it did not occur to Maslow to include it as the lowest need of his hierarchy.
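
As a plain data structure, the categorization sketched above might look like this (the entries are just the examples from the paragraph, with sense-data added as the hypothetical lowest level):

```python
from typing import Optional

# Categories of sensory input in the objective continuum, following
# Maslow's hierarchy with sense-data added at the bottom.
OBJECTIVE_CONTINUUM = {
    "sense-data":   ["having sensory experiences"],
    "lower needs":  ["homeostasis", "eating", "sex", "excretion"],
    "higher needs": ["reputation", "respect", "intimacy"],
    "deliberation": ["problem solving", "creativity", "learning"],
}

def categorize(item: str) -> Optional[str]:
    """Return the level of the hierarchy a given input belongs to, if listed."""
    for level, examples in OBJECTIVE_CONTINUUM.items():
        if item in examples:
            return level
    return None

print(categorize("eating"))  # "lower needs"
```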

The normative continuum is similarly split into a dialectic portion and a "sensory" portion. That is to say, a central thesis of the work is that there are some kind of mathematical intuitions that are not language, but that are used to operate in the domain of pure math and logic. In order to demonstrate that "mathematical intuitions" really do exist, let us consider the case of a synesthetic savant, who is able to evaluate numbers according to how they "feel", and use this feeling to determine whether the number is a prime. The "feeling" is sense-data, but the correlation between the feeling and primality is some other kind of non-lingual intuition.

If synesthetic primality checks exist, it follows that mathematical ability is not entirely based on language. Synesthetic primality checks do exist for some people, and not for others. However, I believe we all experience mathematical intuitions - for most, the experiences are just not as clear as they are for synesthetic savants. If the existence of mathematical intuition is denied, synesthetic primality checks are being declared impossible out of mere metaphysical skepticism, despite plenty of evidence that they do exist and produce strikingly accurate results.

Does this make sense? If so, I can continue.

Mitchell_Porter also said:

Occasionally you get someone who constructs their system in the awareness that it's a product of their own mind and not just an objective depiction of the facts as they were found

I'm aware of that. Objectivity is just one continuum in the theory.

Having written his sequel to Pirsig he now needs to outgrow that act as soon as possible, and acquire some genuine expertise in an intersubjectively recognized domain, so that he has people to talk with and not just talk at.

I'm not exactly in trouble. I have a publisher and I have people to talk with. I can talk with a mathematician I know and on LilaSquad. But given that Pirsig's legacy appears to be continental philosophy, nobody on LilaSquad can help me improve the formal approach even though some are interested in it. I can talk about everything else with them. Likewise, the mathematician is only interested in the formal structure of the theory, and perhaps slightly in the normative continuum, but not in anything else. I wouldn't say I have something to prove or that I need something in particular. I'm mostly just interested to find out how you will react to this.

What I was picking up on in Tuukka's statement was that the irrationals are uncountable whereas the rationals are countable. So the rationals have the cardinality of a set of discrete combinatorial structures, like possible sentences in a language, whereas the irrationals have the cardinality of a true continuum, like a set of possible experiences, if you imagined qualia to be genuinely real-valued properties and e.g. the visual field to be a manifold in the topological sense. It would be a way of saying "descriptions are countable in number, experiences are uncountable".

Something to that effect. This is another reason why I like talking with people. They express things I've thought about with a different wording. I could never make progress just stuck in my head.

I'd say the irrational continua do not have fixed notions of truth and falsehood. If something is "true" now, there is no guarantee it will persist as a rule in the future. There are no proof methods or methods of justification. In a sense, the notions of truth and falsehood are so distorted in the irrational continua that they hardly qualify as truth or falsehood - even if the Bible, operating in the subjective continuum, proclaims that it's "the truth" that Jesus is the Christ.

Mitchell asked:

Incidentally, would I be correct in guessing that Robert Pirsig never replied to you?

As far as I know, the letter was never delivered to Pirsig. The insiders of MoQ-Discuss said their mailing list is strictly for discussing Pirsig's thoughts, not any derivative work. The only active member of Lila Squad whom I presume to have Pirsig's e-mail address said Pirsig doesn't understand the Metaphysics of Quality himself anymore. It seemed pointless to press the issue that the letter be delivered to him. When the book is out, I can send it to him via his publisher and hope he'll receive it. The letter wasn't even very good - the book is better.

I thought Pirsig might want to help me with development of the theory, but it turned out I didn't require his help. Now I only hope he'll enjoy reading the book.

Comment author: Tuukka_Virtaperko 15 February 2012 09:31:54PM *  0 points [-]

Commenting the article:

"When artificial intelligence researchers attempted to capture everyday statements of inference using classical logic they began to realize this was a difficult if not impossible task."

I hope nobody's doing this anymore. It's obviously impossible. "Everyday statements of inference", whatever that might mean, are not exclusively statements of first-order logic, because Russell's paradox is simple enough to be formulated by talking about barbers. The liar paradox is also expressible with simple, practical language.
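
For reference, the barber formulation really is that compact; written out formally (a standard rendering, nothing specific to this thread), the contradiction appears as soon as the quantifier is instantiated with the barber himself:

```latex
% "There is a barber who shaves exactly those who do not shave themselves."
\exists b \,\forall x \,\big( S(b, x) \leftrightarrow \lnot S(x, x) \big)
% Instantiating x := b yields the contradiction
S(b, b) \leftrightarrow \lnot S(b, b)
```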

Wait a second. Wikipedia already knows this stuff is a formalization of Occam's razor. One article seems to attribute the formalization of that principle to Solomonoff, another one to Hutter. In addition, Solomonoff induction, which is essential for both, is not computable. Ugh. So Hutter and Rathmanner actually have the nerve to begin that article by talking about the problem of induction, when the goal is obviously to introduce concepts of computation theory? And they are already familiar with Occam's razor, and aware of it having, at least probably, been formalized?

Okay then, but this doesn't solve the problem of induction. They have not even formalized the problem of induction in a way that accounts for the logical structure of inductive inference and leaves room for various relevance operators. Nobody else has done that either, though. I should get back to this later.

Comment author: Risto_Saarelma 11 February 2012 04:56:44PM 1 point [-]

The omitted information in this approach is information with a high Kolmogorov complexity, which is omitted in favor of information with low Kolmogorov complexity. A very rough analogy would be to describe humans as having a bias towards ideas expressible in few words of English over ideas that need many words of English to express. Using Kolmogorov complexity for sequence prediction in the construction, instead of the English language for ideas, gets rid of the very many problems of rigor involved in the latter, but the basic idea is pretty much the same. You look into things that are briefly expressible before things that must be expressed at length. The information isn't permanently omitted, it's just deprioritized. The algorithm doesn't start looking at the stuff you need long sentences to describe before it has convinced itself that there are no short sentences that describe the observations it wants to explain in a satisfactory way.
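
A toy illustration of the "short descriptions first" ordering (my own example; it has nothing to do with the actual Solomonoff construction beyond the ordering): candidate rules for a number sequence are tried from shortest description to longest, and the first one that reproduces the observations wins.

```python
# Arbitrary candidate rules, each keyed by a "description" whose length
# stands in for description complexity.
observations = [1, 2, 4, 8, 16]

candidates = {
    "2**n":                      lambda n: 2 ** n,
    "n+1":                       lambda n: n + 1,
    "n**2+1":                    lambda n: n ** 2 + 1,
    "fixed list [1,2,4,8,16,3]": lambda n: [1, 2, 4, 8, 16, 3][n],
}

def fits(rule, data):
    return all(rule(i) == v for i, v in enumerate(data))

# Shorter descriptions are examined (and preferred) before longer ones.
for description in sorted(candidates, key=len):
    if fits(candidates[description], observations):
        print("accepted:", description)   # accepted: 2**n
        break
```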

One bit of context that is assumed is that the surrounding universe is somewhat amenable to being Kolmogorov-compressed. That is, there are some recurring regularities that you can begin to discover. The term "lawful universe" sometimes thrown around in LW probably refers to something similar.

Solomonoff's universal induction would not work in a completely chaotic universe, where there are no regularities for Kolmogorov compression to latch onto. You'd also be unlikely to find any sort of native intelligent entities in such universes. I'm not sure if this means that the Solomonoff approach is philosophically untenable, but needing to have some discoverable regularities to begin with before discovering regularities with induction becomes possible doesn't strike me as that great a requirement.

If the problem of context is about exactly where you draw the data for the sequence which you will then try to predict with Solomonoff induction, then in a lawless universe you wouldn't be able to infer things no matter which simple instrumentation you picked, while in a lawful universe you could pick all sorts of instruments - tracking the change of light over time, tracking temperature, tracking the luminosity of the Moon, for simple examples - and you'd start getting Kolmogorov-compressible data where the induction system could start figuring out repeating periods.

The core thing "independent of context" in all this is that all the universal induction systems are reduced to basically taking a series of numbers as input, and trying to develop an efficient predictor for what the next number will be. The argument in the paper is that this construction is basically sufficient for all the interesting things an induction solution could do, and that all the various real-world cases where induction is needed can be basically reduced into such a system by describing the instrumentation which turns real-world input into a time series of numbers.

Comment author: Tuukka_Virtaperko 15 February 2012 08:35:33PM 1 point [-]

Okay. In this case, the article does seem to begin to make sense. Its connection to the problem of induction is perhaps rather thin. The idea of using low Kolmogorov complexity as justification for an inductive argument cannot be deduced as a theorem of something that's "surely true", whatever that might mean. And if it were taken as an axiom, philosophers would say: "That's not an axiom. That's the conclusion of an inductive argument you made! You are begging the question!"

However, it seems like advancements in computation theory have made it possible to do at least remotely practical stuff in areas that bear resemblance to more inert philosophical ponderings. That's good, and this article might even be used as justification for my theory RP - given that the use of Kolmogorov complexity is accepted. I was not familiar with the concept of Kolmogorov complexity despite having heard of it a few times, but my intuitive goal was to minimize the theory's Kolmogorov complexity by removing arbitrary declarations and favoring symmetry.
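
Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a cheap, computable upper bound that is enough for rough comparisons of this sort; a minimal sketch (the example strings are made up):

```python
import zlib

def description_length(text: str) -> int:
    """Compressed size in bytes: a crude, computable upper-bound proxy
    for Kolmogorov complexity (the true quantity is uncomputable)."""
    return len(zlib.compress(text.encode("utf-8"), 9))

# A symmetric, regular formulation tends to compress better than one
# made of ad hoc special cases.
symmetric = "for every x: f(x, y) = f(y, x); value(x) = value(-x); " * 4
ad_hoc    = ("f(1,2)=3; f(2,1)=3; f(5,9)=-2; value(4)=7; value(-4)=7; "
             "special case when x=13: value(13)=0; another exception at x=42")

print(description_length(symmetric), description_length(ad_hoc))
```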

I would say that there are many ways of solving the problem of induction. Whether a theory is a solution to the problem of induction depends on whether it covers the entire scope of the problem. I would say this article covers half of the scope. The rest is not covered, to my knowledge, by anyone other than Robert Pirsig and experts on Buddhism, but those writings are very difficult to approach analytically. Regrettably, I am still unable to publish the relativizability article, which is intended to succeed in the analytic approach.

In any case, even though the widely rejected "statistical relevance" and this "Kolmogorov complexity relevance" share the same flaw if presented as an explanation of inductive justification, the approach is interesting. Perhaps, even, this paper should be titled "A Formalization of Occam's Razor Principle", because that's what it surely seems to be. And I think it's actually an achievement to formalize that principle - an achievement more than sufficient to justify the writing of the article.

Comment author: Risto_Saarelma 14 January 2012 06:39:28PM 3 points [-]

We have managed to create such a sophisticated brain scanner, that it can tell whether a person is thinking of a cat or not. Someone is put into the machine, and the machine outputs that the person is not thinking of a cat. The person objects and says that he is thinking of a cat. What will the observing AI make of that inconsistency? What part of the observation is broken and results in nonconformity of the whole?

1) The brain scanner is broken
2) The person is broken

In order to solve this problem, the AI may have to be able to conceptualize the fact that the brain scanner is a deterministic machine which simply accepts X as input and outputs Y. The scanner does not understand the information it is processing, and the act of processing information does not alter its structure. But the person is different.

I don't really understand this part.

"The scanner does not understand the information but the person does" sounds like some variant of Searle's Chinese Room argument when presented without further qualifiers. People in AI tend to regard Searle as a confused distraction.

The intelligent agent model still deals with deterministic machines that take input and produce output, but it incorporates the possibility of changing the agent's internal state by presenting the output function as just taking the entire input history X* as an input to the function that produces the latest output Y, so that a different history of inputs can lead to a different output on the latest input, just like it can with humans and more sophisticated machines.
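
In code, the distinction being drawn is roughly the following (a sketch with made-up examples): a fixed input-output device versus an agent whose output is a function of the entire input history.

```python
from typing import Callable, List

# A fixed input-output device: the same X always yields the same Y.
Memoryless = Callable[[int], int]

# The intelligent-agent model: output depends on the entire input history X*.
HistoryAgent = Callable[[List[int]], int]

def thermostat(x: int) -> int:
    # Same input, same output, always.
    return 1 if x < 20 else 0

def agent(history: List[int]) -> int:
    # A different past leads to a different response to the same latest input.
    latest = history[-1]
    seen_before = latest in history[:-1]
    return latest * 2 if seen_before else latest

print(agent([5]), agent([5, 3, 5]))   # 5 vs. 10: the history changes the output
```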

I suppose the idea here is that there is some difference whether there is a human being sitting in the scanner, or, say, a toy robot with a state of two bits where one is "I am thinking about cats" and the other is "I am broken and will lie about thinking about cats". With the robot, we could just check the "broken" bit as well from the scan when the robot is disagreeing with the scanner, and if it is set, conclude that the robot is broken.
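
The toy robot in this paragraph is small enough to write out directly (my own rendering of the example):

```python
def diagnose(scanned_cat_bit: bool, scanned_broken_bit: bool, robot_claim: bool) -> str:
    """Compare the robot's verbal claim against the scanned two-bit state."""
    if robot_claim == scanned_cat_bit:
        return "no disagreement"
    # The claim contradicts the scan: the scanned 'broken' bit tells us whether
    # the robot is the unreliable party; otherwise suspect the scanner.
    return "robot is broken (lying)" if scanned_broken_bit else "scanner is suspect"

print(diagnose(scanned_cat_bit=True, scanned_broken_bit=True, robot_claim=False))
# -> "robot is broken (lying)": the scan shows cat-thinking, the robot denies it,
#    and the scanned 'broken' bit explains the mismatch.
```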

I'm not seeing how humans must be fundamentally different. The scanner can already do the extremely difficult task of mapping a raw brain state to the act of thinking about a cat, it should also be able to tell from the brain state whether the person has something going on in their brain that will make them deny thinking about a cat. Things being deterministic and predictable from knowing their initial state doesn't mean they can't have complex behavior reacting to a long history of sensory inputs accompanied by a large amount of internal processing that might correspond quite well to what we think of as reflection or understanding.

Sorry I keep skipping over your formalism stuff, but I'm still not really grasping the underlying assumptions behind this approach. (The underlying assumptions in the computer science approach are, roughly: "the physical world exists, and is made of lots of interacting, simple, Turing-computable stuff and nothing else", "animals and humans are just clever robots made of the stuff", "magical souls aren't involved, not even if they wear a paper bag that says 'conscious experience' on their head".)

The whole philosophical theory-of-everything thing does remind me of this strange thing from a year ago, where the building blocks for the theory were made out of the nowadays more fashionable category theory rather than set theory, though.

Comment author: Tuukka_Virtaperko 08 February 2012 11:59:46AM 1 point [-]

I've read some of this Universal Induction article. It seems to operate from flawed premises.

If we prescribe Occam’s razor principle [3] to select the simplest theory consistent with the training examples and assume some general bias towards structured environments, one can prove that inductive learning “works”. These assumptions are an integral part of our scientific method. Whether they admit it or not, every scientist, and in fact every person, is continuously using this implicit bias towards simplicity and structure to some degree.

Suppose the brain uses algorithms - an uncontroversial supposition. From a computational point of view, the quoted passage is like saying: "In order for a computer not to run a program, such as Indiana Jones and the Fate of Atlantis, the computer must be executing some command to the effect of DoNotExecuteProgram('IndianaJonesAndTheFateOfAtlantis')."

That's not how computers operate. They just don't run the program. They don't need a special process for not running the program. Instead, not running the program is "implicitly contained" in the state of affairs that the computer is not running it. But this notion of implicit containment makes no sense for the computer. There are infinitely many programs the computer is not running at a given moment, so it can't process the state of affairs that it is not running any of them.

Likewise, the use of an implicit bias towards simplicity cannot be meaningfully conceptualized by humans. In order to know how this bias simplifies everything, one would have to know, what information regarding "everything" is omitted by the bias. But if we knew that, the bias would not exist in the sense the author intends it to exist.

Furthermore:

This is in some way a contradiction to the well-known no-free-lunch theorems which state that, when averaged over all possible data sets, all learning algorithms perform equally well, and actually, equally poorly [11]. There are several variations of the no-free-lunch theorem for particular contexts but they all rely on the assumption that for a general learner there is no underlying bias to exploit because any observations are equally possible at any point. In other words, any arbitrarily complex environments are just as likely as simple ones, or entirely random data sets are just as likely as structured data. This assumption is misguided and seems absurd when applied to any real world situations. If every raven we have ever seen has been black, does it really seem equally plausible that there is equal chance that the next raven we see will be black, or white, or half black half white, or red etc. In life it is a necessity to make general assumptions about the world and our observation sequences and these assumptions generally perform well in practice.

The author says that there are variations of the no free lunch theorem for particular contexts. But he goes on to generalize that the notion of no free lunch theorem means something independent of context. What could that possibly be? Also, such notions as "arbitrary complexity" or "randomness" seem intuitively meaningful, but what is their context?
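
For concreteness, the "averaged over all possible data sets" claim can be demonstrated in miniature (a toy illustration of my own, assuming binary labels): whatever fixed predictor you pick, averaging its accuracy over every possible labeling of the unseen points gives exactly one half.

```python
from itertools import product

def average_accuracy(predictor, n_unseen: int) -> float:
    """Average accuracy of a fixed predictor over all 2**n labelings of unseen points."""
    total = 0.0
    for labeling in product([0, 1], repeat=n_unseen):
        hits = sum(predictor(i) == label for i, label in enumerate(labeling))
        total += hits / n_unseen
    return total / 2 ** n_unseen

always_black = lambda i: 0        # "every raven will be black"
alternating  = lambda i: i % 2    # some other arbitrary rule

print(average_accuracy(always_black, 4))   # 0.5
print(average_accuracy(alternating, 4))    # 0.5 -- no free lunch over ALL labelings
```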

The problem is, if there is no context, the solution cannot be proven to address the problem of induction. But if there is a context, it addresses the problem of induction only within that context. Then philosophers will say that the context was arbitrary, and formulate the problem again in another context where previous results will not apply.

In a way, this makes the problem of induction seem like a waste of time. But the real problem is about formalizing the notion of context in such a way that it becomes possible to identify ambiguous assumptions about context. That would be what separates scientific thought from poetry. In science, ambiguity is not desired and should therefore be identified. But philosophers tend to place little emphasis on this, and rather spend time dwelling on problems they should, in my opinion, recognize as unsolvable due to ambiguity of context.
