ciphergoth comments on Welcome to Less Wrong! - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Thank you, I'll be seeing you around :).
Anyway, I have been thinking of starting my year off by reading Chris Langan's CTMU, but I haven't seen anything written about it here or on OB. And I am very wary of what I put into my brain (including LSD :P).
Any opinions on the CTMU?
Google suggests you mean this CTMU.
Looks like rubbish to me, I'm afraid. If what's on this site interests you, I think you'll get a lot more out of the Sequences, including the tools to see why the ideas in the site above aren't really worth pursuing.
Introduction to the CTMU
Yeah, I know what it looks like: metaphysical rubbish. But my dilemma is that Chris Langan is the smartest known living man, which makes it really hard for me to shrug the CTMU off as nonsense. Also, from what I skimmed, it looks like a much deeper examination of reductionism and strange loops, which are ideas I hold dear.
I've read and understand the sequences, though I'm not familiar enough with them to use them without a rationalist context.
Eh, I'm smart too. Looks to me like you were right the first time and need to have greater confidence in yourself.
More to the point, you do not immediately fail the "common ground" test.
Pragmatically, I don't care how smart you are, but whether you can make me smarter. If you are so much smarter than I as to not even bother, I'd be wasting my time engaging your material.
I should note that the ability to explain things isn't the same attribute as intelligence. I am lucky enough to have it. Other legitimately intelligent people do not.
If your goal is to convey ideas to others, instrumental rationality seems to demand you develop that capacity.
Considering the extraordinary rarity of good explainers in this entire civilization, I'm saddened to say that talent may have something to do with it, not just practice.
I wonder what I should do. I'm smart, and I seem to be able to explain things that I know to people well. To my lament, I have the same problem as Thomas: I apparently suck at learning things so that they're internalized and in my long-term memory.
I can learn from dead people, stupid people, or by watching a tree for an hour. I don't think I understand your point.
I didn't use the word "learn". My point is about a smart person conveying their ideas to someone. Taboo "smart". Distinguish ability to reach goals, and ability to score high on mental aptitude tests. If they are goal-smart, and their goal is to convince, they will use their iq-smarts to develop the capacity to convince.
Being very intelligent does not imply not being very wrong.
You just get to make bigger mistakes than others. From the YouTube videos, Langan looks like a really bright fellow who has a very broken toolbox and little correction. Argh!
However intelligent he is, he fails to present his ideas so as to gradually build a common ground with lay readers. "If you're so smart, how come you ain't convincing?"
The "intelligent design" references on his Wikipedia bio are enough to turn me away. Can you point us to a well-regarded intellectual who has taken his work seriously and recommends his work? (I've used that sort of bridging tactic at least once, Dennett convincing me to read Julian Jaynes.)
"Convincing" has long been a problem for Chris Langan. Malcolm Gladwell relates a story about Langan attending a calculus course in first year undergrad. After the first lecture, he went to offer criticism of the prof's pedagogy. The prof thought he was complaining that the material was too hard; Langan was unable to convey that he had understood the material perfectly for years, and wanted to see better teaching.
It is. I got as far as this paragraph of the introduction to his paper before I found a critical flaw:
At this point, he's already begging the question, i.e. presupposing the existence of supernatural entities. These "laws" he's talking about are in his head, not in the world.
In other words, he hasn't even got done presenting what problem he's trying to solve, and he's already got it completely wrong, and so it's doubtful he can get to correct conclusions from such a faulty premise.
That's not a critical flaw. In metaphysics, you can't take for granted that the world is not in your head. The only thing you really can do is to find an inconsistency, if you want to prove someone wrong.
Langan has no problems convincing me. His attempt at constructing a reality theory is serious and mature and I think he conducts his business about the way an ordinary person with such aims would. He's not a literary genius like Robert Pirsig, he's just really smart otherwise.
I've never heard anyone to present such criticism of the CTMU that would actually imply understanding of what Langan is trying to do. The CTMU has a mistake. It's that Langan believes (p. 49) the CTMU to satisfy the Law Without Law condition, which states: "Concisely, nothing can be taken as given when it comes to cosmogony." (p. 8)
According to the Mind Equals Reality Principle, the CTMU is comprehensive. This principle "makes the syntax of this theory comprehensive by ensuring that nothing which can be cognitively or perceptually recognized as a part of reality is excluded for want of syntax". (p. 15) But undefinable concepts can neither be proven to exist nor proven not to exist. This means the Mind Equals Reality Principle must be assumed as an axiom. But to do so would violate the Law Without Law condition.
The Metaphysical Autology Principle could be stated as an axiom, which would entail the nonexistence of undefinable concepts. This principle "tautologically renders this syntax closed or self-contained in the definitive, descriptive and interpretational senses". (p. 15) But it would be arbitrary to have such an axiom, and the CTMU would again fail to fulfill Law Without Law.
If that makes the CTMU rubbish, then Russell's Principia Mathematica is also rubbish, because it has a similar problem which was pointed out by Gödel. EDIT: Actually the problem is somewhat different than the one addressed by Gödel.
Langan's paper can be found here EDIT: Fixed link.
To clarify, I'm not the generic "skeptic" of philosophical thought experiments. I am not at all doubting the existence of the world outside my head. I am just an apparently competent metaphysician in the sense that I require a Wheeler-style reality theory to actually be a Wheeler-style reality theory with respect to not having arbitrary declarations.
There might not be many people here who are sufficiently up to speed on philosophical metaphysics to have any idea what, for example, a Wheeler-style reality theory is. My stereotypical notion is that the people at LW have been pretty much ignoring philosophy from Kant onwards that isn't grounded in mathematics, physics or cognitive science, and won't bother with stuff that doesn't seem readable from this viewpoint. The tricky thing that would help would be to somehow translate the philosopher-speak into lesswronger-speak. Unfortunately this'd require some fluency in both.
It's not like your average "competent metaphysicist" would understand Langan either. He wouldn't possibly even understand Wheeler. Langan's undoing is to have the goals of a metaphysicist and the methods of a computer scientist. He is trying to construct a metaphysical theory which structurally resembles a programming language with dynamic type checking, as opposed to static typing. Now, metaphysicists do not tend to construct such theories, and computer scientists do not tend to be very familiar with metaphysics. Metaphysical theories tend to be deterministic instead of recursive, and to have a finite, preset amount of states that an object can have. I find the CTMU paper a bit sketchy and missing important content, besides having the mistake. If you're interested in the mathematical structure of a recursive metaphysical theory, here's one: http://www.moq.fi/?p=242
Formal RP doesn't require metaphysical background knowledge. The point is that because the theory includes a cycle of emergence, represented by the power set function, any state of the cycle can be defined in relation to other states and prior cycles, and the amount of possible states is infinite. The power set function will generate a staggering amount of information in just a few cycles, though. Set R is supposed to contain sensory input and thus solve the symbol grounding problem.
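The claim that the power set function generates a staggering amount of information in just a few cycles is easy to check numerically. Below is a minimal sketch; the `powerset` helper and the two-element seed set are my own illustration, not part of RP itself:

```python
from itertools import chain, combinations

def powerset(s):
    """Return the power set of s as a set of frozensets."""
    items = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))}

# Seed "cycle" with just two elements, then iterate the power set.
level = {frozenset({"a"}), frozenset({"b"})}
sizes = [len(level)]
for _ in range(3):
    level = powerset(level)
    sizes.append(len(level))

print(sizes)  # [2, 4, 16, 65536] -- each step has 2**n elements
```

Three cycles take a two-element set to 65,536 elements; a fourth would already be computationally intractable, which illustrates why the cycle blows up so quickly.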
Of course the symbol grounding problem is rather important, so it doesn't really suffice to say that "set R is supposed to contain sensory input". The metaphysical idea of RP is something to the effect of the following:
Let n be 4.
R contains everything that could be used to ground the meaning of symbols.
N contains relations of purely abstract symbols.
Let ℘(T) be the power set of T.
The solving of the symbol grounding problem requires R and N to be connected. Let us assume that ℘(Rₙ) ⊆ Rₙ₊₁. R₅ hasn't been defined, though. If we don't assume subsets of R to emerge from each other, we'll have to construct much more complicated theories that are more difficult to understand.
This way we can assume there are two ways of connecting R and N. One is to connect them in the same order, and one in the inverse order. The former is set O and the latter is set S.
Set O includes the "realistic" theories, which assume the existence of an "objective reality".
The relationship between O and N:
Set S includes the "solipsistic" ideas in which the "mind focuses on itself".
The relationship between S and N:
That's the metaphysical portion in a nutshell. I hope someone was interested!
We were talking about applying the metaphysics system to making an AI earlier in IRC, and the symbol grounding problem came up there as a basic difficulty in binding formal reasoning systems to real-time actions. It doesn't look like this was mentioned here before.
I'm assuming I'd want to actually build an AI that needs to deal with symbol grounding, that is, it needs to usefully match some manner of declarative knowledge it represents in its internal state to the perceptions it receives from the outside world and to the actions it performs on it. Given this, I'm getting almost no notion of what useful work this theory would do for me.
Mathematical descriptions can be useful for people, but it's not given that they do useful work for actually implementing things. I can define a self-improving friendly general artificial intelligence mathematically by defining
FAI = ⟨S, P*⟩ as an artificial intelligence instance, consisting of its current internal state S and the history of its perceptions up to the present P*,

a: FAI → A* as a function that gives the list of possible actions for a given FAI instance,

u: A → Real as a function that gives the utility of each action as a real number, with higher numbers given to actions that advance the purposes of the FAI better based on its current state and perception history, and

f: FAI × A → (S, P) as an update function that takes an action and returns a new FAI internal state with any possible self-modifications involved in the action applied, and a new perception item that contains whatever new observations the FAI made as a direct result of its action.

And there's a quite complete mathematical description of a friendly artificial intelligence. You could probably even write a bit of neat pseudocode using the pieces there, but that's still not likely to land me a cushy job supervising the rapid implementation of the design at SIAI, since I don't have anything that does actual work there. All I did was push all the complexity into the black boxes of u, a and f.

I also implied a computational approach where the system enumerates every possible action, evaluates them all and then picks a winner with how I decided to split up the definition. This is mathematically expedient, given that in mathematics any concerns of computation time can be pretty much waved off, but appears rather naive computationally, as it is likely that both coming up with possible actions and evaluating them can get extremely expensive in the artificial general intelligence domain.
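That enumerate-evaluate-pick reading of the definition can be sketched in a few lines. `Agent`, `actions`, `utility` and `update` below are hypothetical stand-ins for FAI, a, u and f; their bodies are trivial placeholders, precisely because all the real work hides in those black boxes:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    state: int = 0                                  # internal state S
    perceptions: list = field(default_factory=list)  # perception history P*

def actions(agent):           # a: FAI -> A*  (placeholder)
    return ["wait", "act"]

def utility(agent, action):   # u: A -> Real  (placeholder)
    return 1.0 if action == "act" else 0.0

def update(agent, action):    # f: FAI x A -> (S, P)  (placeholder)
    return Agent(agent.state + 1, agent.perceptions + [action])

def step(agent):
    # Enumerate every possible action, score each, pick the winner:
    # mathematically expedient, computationally naive.
    best = max(actions(agent), key=lambda act: utility(agent, act))
    return update(agent, best)

agent = step(Agent())
print(agent.state, agent.perceptions)  # 1 ['act']
```

The sketch runs, but it does no useful work: the whole difficulty of general intelligence sits inside the three placeholder functions, which is exactly the point being made above.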
With the metaphysics thing, beyond not getting a sense of it doing any work, I'm not even seeing where the work would hide. I'm not seeing black box functions that need to do an unknowable amount of work, just sets with strange elements being connected to other sets with strange elements. What should you be able to do with this thing?
You probably have a much more grassroots-level understanding of the symbol grounding problem. I have only solved the symbol grounding problem to the extent that I have a formal understanding of its nature.
In any case, I am probably approaching AI from a point of view that is far from the symbol grounding problem. My theory does not need to be seen as a useful solution to that problem. But when a useful solution is created, I postulate it can be placed within RP. Such a solution would have to be an algorithm for creating S-type or O-type sets of members of R.

More generally, I would find RP useful as an extremely general framework of how AI or parts of AI can be constructed in relation to each other, especially with regards to understanding language and the notion of consciousness. This doesn't necessarily have anything to do with some more atomistic AI projects, such as trying to make a robot vacuum cleaner find its way back to the charging dock.
At some point, philosophical questions and AI will collide. Suppose the following thought experiment:
We have managed to create such a sophisticated brain scanner, that it can tell whether a person is thinking of a cat or not. Someone is put into the machine, and the machine outputs that the person is not thinking of a cat. The person objects and says that he is thinking of a cat. What will the observing AI make of that inconsistency? What part of the observation is broken and results in nonconformity of the whole?
In order to solve this problem, the AI may have to be able to conceptualize the fact that the brain scanner is a deterministic machine which simply accepts X as input and outputs Y. The scanner does not understand the information it is processing, and the act of processing information does not alter its structure. But the person is different.
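The distinction the AI would have to draw can be made concrete as a toy contrast (the names `scanner` and `Person` are mine, purely illustrative): the scanner is a pure function whose structure is untouched by what it processes, while the person is stateful, so the act of being queried can itself change what is true of them.

```python
def scanner(brain_state):
    # Deterministic and stateless: same input X always yields the same
    # output Y, and classifying does not change the scanner itself.
    return "cat" in brain_state

class Person:
    # Stateful: the act of processing information alters the processor.
    def __init__(self):
        self.thoughts = set()

    def asked_about(self, concept):
        self.thoughts.add(concept)  # being asked introduces the thought
        return concept in self.thoughts

p = Person()
print(scanner({"dog"}))      # False, and always False for this input
print(p.asked_about("cat"))  # True: the question itself evoked "cat"
```

On this toy picture, the inconsistency in the thought experiment is unsurprising: asking the person whether they are thinking of a cat changes their state, while the scanner's verdict was computed from a state that no longer obtains.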
RP should help with such problems because it is intended as an elegant, compact and flexible way of defining recursion while allowing the solution of the symbol grounding problem to be contained in the definition in a nontrivial way. That is, RP as a framework of AI is not something that says: "Okay, this here is RP. Just perform the function RP(sensory input) and it works, voilà." Instead, it manages to express two different ways of solving the symbol grounding problem and to define their accuracy as a natural number n. In addition, many emergence relations in RP are logical consequences of the way RP solves the symbol grounding problem (or, if you prefer, "categorizes the parts of the actual solution to the symbol grounding problem").
In the previous thought experiment, the AI should manage to understand that the scanner deterministically performs the operation ℘(R) ⊆ S, and does not define S in terms of anything else. The person, on the other hand, is someone whose information processing is based on RP or something similar.
But what you read from moq.fi is something we wrote just a few days ago. It is by no means complete.
Questions to you:
I will not guarantee having discussions with me is useful for attaining a good job. ;)
You can't rely too much on intelligence tests, especially in the super-high range. The tester himself admitted that Langan fell outside the design range of the test, so the listed score was an extrapolation. Further, IQ measurements, especially at the extremes and especially on only a single test (and as far as I could tell from the Wikipedia article, he was only tested once), measure test-taking ability as much as general intelligence.
Even if he is the most intelligent man alive, intelligence does not automatically mean that you reach the right answer. All evidence points to it being rubbish.
Many smart people fool themselves in interesting ways thinking about this sort of thing. And of course, when predicting general intelligence based on IQ, remember to account for regression to the mean: if there's such a thing as the smartest person in the world by some measure of general intelligence, it's very unlikely to be the person with the highest IQ.
A powerful computer with a bad algorithm or bad information can produce a high volume of bad results that are all internally consistent.
(IQ may not be directly analogous to computing power, but there are a lot of factors that matter more than the author's intelligence when assessing whether a model bears out in reality.)