All of Ian_Ryan's Comments + Replies

But could you really have saved $100 by having decided to buy that same exact house except without that extra square foot?

6fiddlemath
Probably not. But, if you had rather less stuff, you could have probably bought a pretty similar house with one fewer closet for a few thousand less.

Yes. Sorry for the unintended ambiguity.

I see. No problem.

By the way, do you have an opinion on whether it's good or bad that nobody in the AI community seems to employ FPE?

2RHollerith
I find that I must retract my statement (in the great-great-grandparent) that first-person epistemology (FPE) is not used in the writings in AI around here. In particular, one of the most heavily referenced lines of AI research around here is AIXI, which is essentially a single equation, namely, http://www.hutter1.net/ai/aixi1linel.gif (along with a mathematical tome's worth of exposition explaining the significance of the equation). There are four kinds of "points" or bound variables in the equation: computer programs (represented by the bound variable q), rewards (r), actions (a), and observations (o).

If you examine the text surrounding the above equation, you find that the author gives "a camera image" as an example of an observation. I haven't read Hume, but given what you have said about FPE above, this AIXI formalism seems like an instance of FPE.

I realize that it is unlikely that you want to learn enough math to understand this AIXI formalism, but I felt I had to bring AIXI up to stop the propagation of the probably-false information I had introduced in the great-great-grandparent. Note that I probably do not have time to learn anything new about philosophy or to explain how AIXI might relate to the philosophical traditions or lines you are interested in.
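For reference, the one-line AIXI equation behind that link is standardly rendered roughly as follows; this is a reconstruction from Hutter's published presentation, and his index conventions vary slightly across papers:

    $$ a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} $$

Here $U$ is a universal Turing machine, the innermost sum ranges over programs $q$ consistent with the interaction history so far, and $2^{-\ell(q)}$ weights each program by its length (the Solomonoff prior). All four kinds of bound variables (q, r, a, o) appear, and the observations o are the "first-person" inputs in question.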

Certainly not the writings in AI discussed on LW. Probably not any other writings either.

Isn't that what I said? I don't get what you're trying to say here.

ETA: Oh, are you responding to "perhaps they all employ FPE like it's nothing"? At first, I thought you were responding to "I'm not well-read in AI".

1RHollerith
Yes. Sorry for the unintended ambiguity.

First, a couple general considerations:

  • How far are you in the book? If you're stuck in Part II, I would recommend skipping to its last section. In my opinion, for somebody just starting out with his philosophy, the rest of that part simply isn't insightful enough to justify how difficult it is to read. Save it for later, if at all.

  • Remember that he wrote it over 200 years ago. You'll have to spend a lot of time getting fluent in his idiosyncratic 18th century English to really get what he's saying. I find that sort of thing interesting, so it was actual…
1RHollerith
Certainly not the writings in AI discussed on LW. Probably not any other writings either.
2atucker
Hitchhiker's Guide to the Galaxy reference.

Everybody's always citing Hume, but nobody ever seems to do him any justice. The OP is simply yet another example in this trend. I have no idea whether after reading the first paragraph of your post, Hume would agree that he couldn't "bring himself to seriously doubt the content of his own subjective experience", but I'm pretty sure that by the end of it, he would summarily reject your interpretation of his epistemology.

First of all, to make what I'm saying at least sound plausible, I need only give you one counter-example:

  • He referred to our pr…
1Barry_Cotter
I'm currently reading the Treatise, and I have the Dialogues and Inquiry on my table for after that. So far I get the impression that I would be far better off reading some cog sci and PT:TLOS, maybe with some physics. Am I wrong?

Very interesting suggestion. Thanks.

By the way, in that word language, I simply have a group of 4 grammatical particles, each referring to 1 of the 4 set operations (union, intersection, complement, and symmetric difference). That simplifies a few of the systems that we find in English or whatever. For example, we don't find intersection only in the relationship between a noun and an adjective; we also find it in a bunch of other places. Here's a list of a bunch of examples of where we see one of the set operations in English:

  • There's a deer over there, and he looks worried. (intersection)
…
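To make the set-operation particles concrete, here is a minimal Python sketch of the system as described above; the particle names (ku, pa, ne, vo) and the example sets are invented for illustration and are not from the actual language:

    # Toy model: four grammatical particles, one per set operation.
    # Particle names and example sets are hypothetical.
    PARTICLES = {
        "ku": lambda a, b: a | b,  # union
        "pa": lambda a, b: a & b,  # intersection
        "ne": lambda a, b: a - b,  # relative complement (set difference)
        "vo": lambda a, b: a ^ b,  # symmetric difference
    }

    def combine(particle, a, b):
        """Interpret 'a PARTICLE b' as the corresponding set operation."""
        return PARTICLES[particle](a, b)

    # "There's a deer over there, and he looks worried": the referent lies
    # in the intersection of the deer-things and the worried-things.
    deer = {"thing1", "thing2"}
    worried = {"thing2", "thing3"}
    print(combine("pa", deer, worried))  # {'thing2'}

Note that "complement" is modeled here as relative complement (set difference); an absolute complement would additionally need an explicit universe of discourse.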

I think that most of the potential lies in the "extra-radical possibilities". The traditional linguistics descriptions (adjectives, nouns, prepositions, and so on) don't seem to apply very well to any of my word languages. After all, they're just a bunch of natural language components; they needn't show up in an artificial language.

For example, in one of my word languages, there's no distinction between nouns and adjectives (meaning that there aren't any nouns and adjectives, I guess). To express the equivalent of the phrase "stupid man"…
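Reading the truncated example charitably, one natural way to model a word language without the noun/adjective distinction is to treat every content word as a one-place predicate and modification as predicate conjunction. A hypothetical Python sketch (the word extensions are invented):

    # Hypothetical model: every content word is a one-place predicate,
    # so "stupid man" is just the conjunction of two predicates and no
    # noun/adjective distinction is needed.
    def man(x):
        return x in {"bob", "dave"}       # invented extension

    def stupid(x):
        return x in {"bob", "carol"}      # invented extension

    def modify(p, q):
        """Juxtaposition 'p q' denotes the conjunction of the predicates."""
        return lambda x: p(x) and q(x)

    stupid_man = modify(stupid, man)
    print([x for x in ("bob", "carol", "dave") if stupid_man(x)])  # ['bob']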

4erratio
There are Australian Aboriginal languages that work a lot like this, and in some ways go further. The equivalent of the sentence "Big is coming" would be perfectly grammatical in Dyirbal, with the big thing(s) to be determined through the surrounding context. In some other languages, there's little or no distinction between adjectives and verbs, so the English "this car is pink" would be translated to something more like "this car pinks".

Basically what I'm saying is that a large number of the more obvious "extra-radical possibilities" are already implemented in existing languages, albeit not in the overstudied languages of Europe.

Sorry, I should have said that it's not necessarily the same animal. The whole mountain of evidence concerns natural languages, right? Do you have any evidence that an artificial language with a self-segregating morphology and a simple sound structure would also go through the same changes?

So I'm not necessarily saying that the changes wouldn't occur; I'm simply saying that we can't reject out of hand the idea that we could build a system where they won't occur, or at least build a system where they would occur in a useful way (rather than a way that would…
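For what it's worth, here is a minimal sketch of what a self-segregating morphology buys you, under a toy phonology invented for this example (not the actual design under discussion): if word shape alone encodes boundaries, a hearer, or a parser, can segment an unbroken sound stream without a dictionary.

    import re

    # Toy self-segregating scheme (hypothetical): words are sequences of
    # consonant-vowel syllables, and only the final syllable of a word may
    # contain the vowel 'a', so boundaries are recoverable from form alone.
    def segment(stream):
        """Split an unbroken phoneme stream into words."""
        return re.findall(r"(?:[ptkmns][eiou])*[ptkmns]a", stream)

    print(segment("tekomanisa"))  # ['tekoma', 'nisa']

Under such a scheme, a sound change that moved 'a' into non-final syllables would destroy the self-segregation property, which is exactly the kind of drift being debated here.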

Thanks for the link. Yeah, that's one of the ideas. It's still in its infancy though, so I don't have anything to show off.

The flow thing was just an example. The point was simply to illustrate that we shouldn't reject out of hand the idea that an ordinary artificial language (as opposed to mathematical notation or something) could retain its regularity.

The point is simply that the evolution of the language directly depends on how it starts, which means that you could design it in such a way that it drives its own evolution in a useful way. Just because it would evolve doesn't mean that it would lose its regularity. The flow thing is just one example of many. If it flows well, that's simply one less thing to worry about.

However, there are thousands of human languages, which have all been changing their pronunciation for (at least) tens of thousands of years in all kinds of ways, and they keep changing as we speak. If such a happy fixed point existed, don't you think that some of them would have already hit it by now?

No, I don't. Evolution is always a hack of what came before it, whereas scrapping the whole thing and starting from scratch doesn't suffer from that problem. I don't need to hack an existing structure; I can build exactly what I want right now.

Here's an exc…

0Vladimir_M
How do you know that? To support this claim, you need a model that predicts the actually occurring sound changes in natural languages, and also that sound changes would not occur in a language with self-segregating morphology. Do you have such a model? If you do, I'd be tremendously curious to see it.
3erratio
Only until phonological changes, morphological erosion, cliticisation, and sundry other processes take place. And whether and how those processes happen isn't related to how well the phonology flows, either, as far as I can tell.

If you build an artificial word language, you could make it in such a way that it would drive its own evolution in a useful way. A few examples:

  • If you make a rule available to derive a word easily, the user would be less likely to coin a new one (see the sketch after this list).
  • If you also build a few other languages with a similar sound structure, you could make it super easy to coin new words without messing up the sound system.
  • If you make the sound system flow well enough, it would be unlikely that anybody would truncate the words to make it easier to pronounce or whatev…
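As a minimal illustration of the first rule above (cheap, regular derivation crowding out ad-hoc coinage), here is a hypothetical sketch in Python; the affixes, senses, and roots are all invented for the example:

    # Hypothetical illustration: if derivation is regular and cheap,
    # speakers reach for it instead of coining opaque new roots.
    AFFIXES = {
        "agent": "ist",    # one who does X
        "place": "arium",  # place where X happens
        "tool":  "er",     # instrument for X
    }

    def derive(root, sense):
        """Build a new word from an existing root by a regular rule."""
        return root + AFFIXES[sense]

    print(derive("swim", "place"))  # 'swimarium' -- no new root needed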
2Vladimir_M
If you plan to construct a language akin to programming languages or mathematical formulas, i.e. one that is fully specified by a formal grammar and requires slow and painstaking effort for humans to write or decode, then yes, clearly you can freeze it as an unchangeable standard. (Though of course, devising such a language that is capable of expressing something more general is a Herculean task, which I frankly don't consider feasible given the present state of knowledge.)

On the other hand, if you're constructing a language that will be spoken by humans fluently and easily, there is no way you can prevent it from changing in all sorts of unpredictable ways. For example, you write:

  "If you make the sound system flow well enough, it would be unlikely that anybody would truncate the words to make it easier to pronounce or whatev…"

However, there are thousands of human languages, which have all been changing their pronunciation for (at least) tens of thousands of years in all kinds of ways, and they keep changing as we speak. If such a happy fixed point existed, don't you think that some of them would have already hit it by now? The exact mechanisms of phonetic change are still unclear, but a whole mountain of evidence indicates that it's an inevitable process. Similar could be said about syntax, and pretty much any other aspect of grammar.

Look at it this way: the fundamental question is whether your artificial language will use the capabilities of the human natural language hardware. If yes, then it will have to change to be compatible with this hardware, and will subsequently share all the essential properties of natural languages (which are by definition those that are compatible with this hardware, and whose subset happens to be spoken around the world). If not, then you'll get a formalism that must be handled by the general computational circuits in the human brain, which means that its use will be very slow, difficult, and error-prone for humans, just like with programming languages and math formulas.
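To illustrate the distinction being drawn here, a toy example of a language "fully specified by a formal grammar": every valid utterance is derivable from explicit rules, so the standard can in principle be frozen. The mini-grammar below is invented for the example (Python):

    # A toy "fully specified" language: every valid sentence is derivable
    # from these explicit rules, so nothing is left to usage or drift.
    # The grammar itself is invented for this example.
    GRAMMAR = {
        "SENTENCE": [["NOUN", "VERB"], ["NOUN", "VERB", "NOUN"]],
        "NOUN":     [["robot"], ["camera"]],
        "VERB":     [["sees"], ["moves"]],
    }

    def valid(tokens, symbol="SENTENCE"):
        """Check whether a token list is derivable from `symbol`."""
        if symbol not in GRAMMAR:                 # terminal symbol
            return tokens == [symbol]
        return any(match(tokens, prod) for prod in GRAMMAR[symbol])

    def match(tokens, symbols):
        """Try every way of splitting `tokens` across `symbols`."""
        if not symbols:
            return not tokens
        return any(
            valid(tokens[:i], symbols[0]) and match(tokens[i:], symbols[1:])
            for i in range(len(tokens) + 1)
        )

    print(valid(["robot", "sees", "camera"]))  # True
    print(valid(["camera", "robot"]))          # False

Whether humans could ever speak such a thing fluently is, of course, exactly the point under dispute.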

How are you so sure of all that stuff?

1Vladimir_M
If you specify in more detail which parts of what I wrote you dispute, I can provide a more detailed argument. As the simplest and most succinct argument against artificial languages with allegedly superior properties, I would make the observation that human languages change with time and ask: what makes you think that your artificial language won't also undergo change, or that the change will not be such that it destroys these superior properties?

For a few years now, I've been working on a project to build an artificial language. I strongly suspect that the future of the kind of communication that goes on here will belong to an artificial language. English didn't evolve for people like us. For our purpose, it's a cumbersome piece of shit, rife with a bunch of fallacies built directly into its engine. And I assume it's the same way with all the other ones. For us, they're sick to the core.

But I should stress that I don't think the future will belong to any kind of word language. English is a word la…

2Nisan
Is it by any chance a nonlinear fully two-dimensional writing system?
5Vladimir_M
I don't want to sound disrespectful towards your efforts, but to be blunt, artificial languages intended for communication between people are a complete waste of time. The reason is that human language ability is based on highly specialized hardware with a huge number of peculiarities and constraints. There is a very large space for variability within those, of course, as is evident from the great differences between languages, but any language that satisfies them has roughly the same level of "problematic" features, such as irregular and complicated grammar, semantic ambiguities, literal meaning superseded by pragmatics in complicated and seemingly arbitrary ways, etc., etc.

Now, another critical property of human languages is that they change with time. Usually this change is very slow, but if people are forced to communicate in a language that violates the natural language constraints in some way, that language will quickly and spontaneously change into a new natural language that fits them. This is why attempts to communicate in regularized artificial languages are doomed, because a spontaneous, unconscious, and irresistible process will soon turn the regular artificial language into a messy natural one.

Of course, it does make sense to devise artificial languages for communication between humans and non-human entities, as evidenced by computer programming languages or standardized dog commands. However, as long as they have the same brain hardware, humans are stuck with the same old natural languages for talking to each other.

But isn't being wary of coming off as lazy or unconstructive different than being afraid to make mistakes? The former seems desirable; the latter not so much.

I haven't been here long enough to verify whether that norm (that it doesn't require bravery to admit a mistake) really is in place, but assuming it is, I'm sure I'll enjoy my stay!

Indeed, if there's anything that could make or break your rationality in one shot, it's whether you're afraid to make mistakes. There's no influence more corrupting than being afraid to screw up, and there's nothing more liberating than being free of that fear.

I like this post, if only because it cuts through the standard confusion between feeling as if doing something particular would be morally wrong, and thinking that. The former is an indicator like a taste or a headache, and the latter is a thought process like deciding it would be counterproductive to eat another piece of candy.

I don't know what the LW orthodoxy says on this issue; all I know is that, in general, it's pretty common for people to equivocate between moral feelings and moral thoughts until they end up believing something totally crazy. Nobody seems…