Yes. Sorry for the unintended ambiguity.
I see. No problem.
By the way, do you have an opinion on whether it's good or bad that nobody in the AI community seems to employ FPE?
Certainly not the writings in AI discussed on LW. Probably not any other writings either.
Isn't that what I said? I don't get what you're trying to say here.
ETA: Oh, are you responding to "perhaps they all employ FPE like it's nothing"? At first, I thought you were responding to "I'm not well-read in AI".
First, a couple of general considerations:
How far are you in the book? If you're stuck in Part II, I would recommend skipping to its last section. In my opinion, for somebody just starting out with his philosophy, the rest of that part simply isn't insightful enough to justify how difficult it is to read. Save it for later, if at all.
Remember that he wrote it over 200 years ago. You'll have to spend a lot of time getting fluent in his idiosyncratic 18th-century English to really get what he's saying. I find that sort of thing interesting, so it was actual...
Damn. Should've known.
Why? Just wondering.
Everybody's always citing Hume, but nobody ever seems to do him any justice. The OP is simply yet another example of this trend. I have no idea whether, after reading the first paragraph of your post, Hume would agree that he couldn't "bring himself to seriously doubt the content of his own subjective experience", but I'm pretty sure that by the end of it, he would summarily reject your interpretation of his epistemology.
First of all, to make what I'm saying at least sound plausible, I need only give you one counter-example:
Very interesting suggestion. Thanks.
By the way, in that word language, I simply have a group of four grammatical particles, each referring to one of the four set operations (union, intersection, complement, and symmetric difference). That simplifies a few of the systems that we find in English or whatever. For example, we don't find intersection only in the relationship between a noun and an adjective; we also find it in a bunch of other places. Here are a bunch of examples of where we see one of the set operations in English:
I think that most of the potential lies in the "extra-radical possibilities". The traditional linguistic categories (adjectives, nouns, prepositions, and so on) don't seem to apply very well to any of my word languages. After all, they're just a bunch of natural-language components; they needn't show up in an artificial language.
For example, in one of my word languages, there's no distinction between nouns and adjectives (meaning that there aren't any nouns and adjectives, I guess). To express the equivalent of the phrase "stupid man"...
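To make the particle idea concrete, here's a minimal sketch in Python, assuming word meanings can be modeled as sets of entities. The toy universe and all the names here are my own invention for illustration, not anything from the actual language:

```python
# A minimal sketch: treat each word as the set of entities it applies to,
# and treat the four grammatical particles as the four set operations.
# (All names and the toy universe are hypothetical.)

UNIVERSE = {"alice", "bob", "carol", "dave"}

# Each "word" denotes a set of entities.
MAN = {"bob", "dave"}
STUPID = {"bob", "carol"}

def union(a, b):                 # particle 1: "a or b"
    return a | b

def intersection(a, b):          # particle 2: "a and b" (the noun-adjective case)
    return a & b

def complement(a):               # particle 3: "not a", relative to the universe
    return UNIVERSE - a

def symmetric_difference(a, b):  # particle 4: "a or b, but not both"
    return a ^ b

# With no noun/adjective distinction, "stupid man" is just the
# intersection particle applied to two ordinary words:
print(intersection(STUPID, MAN))  # {'bob'}
```

The point of the sketch is only that one operation, marked by one particle, covers what English spreads across several constructions (adjective modification, relative clauses, and so on).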
Sorry, I should have said that it's not necessarily the same animal. The whole mountain of evidence concerns natural languages, right? Do you have any evidence that an artificial language with a self-segregating morphology and a simple sound structure would also go through the same changes?
So I'm not necessarily saying that the changes wouldn't occur; I'm simply saying that we can't reject out of hand the idea that we could build a system where they wouldn't occur, or at least build a system where they would occur in a useful way (rather than a way that would...
Thanks for the link. Yeah, that's one of the ideas. It's still in its infancy though, so I don't have anything to show off.
The flow thing was just an example. The point was simply to illustrate that we shouldn't reject out of hand the idea that an ordinary artificial language (as opposed to mathematical notation or something) could retain its regularity.
The point is simply that the evolution of the language directly depends on how it starts, which means that you could design it in such a way that it drives its own evolution in a useful direction. Just because it would evolve doesn't mean that it would lose its regularity. The flow thing is just one example of many. If it flows well, that's simply one thing you don't have to worry about.
However, there are thousands of human languages, which have all been changing their pronunciation for (at least) tens of thousands of years in all kinds of ways, and they keep changing as we speak. If such a happy fixed point existed, don't you think that some of them would have already hit it by now?
No, I don't. Evolution is always a hack of what came before it, whereas scrapping the whole thing and starting from scratch doesn't suffer from that problem. I don't need to hack an existing structure; I can build exactly what I want right now.
Here's an exc...
If you build an artificial word language, you could make it in such a way that it would drive its own evolution in a useful way. A few examples:
How are you so sure of all that stuff?
For a few years now, I've been working on a project to build an artificial language. I strongly suspect that the future of the kind of communication that goes on here will belong to an artificial language. English didn't evolve for people like us. For our purposes, it's a cumbersome piece of shit, rife with a bunch of fallacies built directly into its engine. And I assume it's the same way with all the other ones. For us, they're sick to the core.
But I should stress that I don't think the future will belong to any kind of word language. English is a word language...
But isn't being wary of coming off as lazy or unconstructive different from being afraid to make mistakes? The former seems desirable; the latter not so much.
I haven't been here long enough to verify whether that norm (that it doesn't require bravery to admit a mistake) really is in place, but assuming it is, I'm sure I'll enjoy my stay!
Indeed, if there's anything that could make or break your rationality in one shot, it's whether you're afraid to make mistakes. There's no influence more corrupting than being afraid to screw up, and there's nothing more liberating than being free of that fear.
I like this post, if only because it cuts through the standard confusion between feeling that doing some particular thing would be morally wrong and thinking that it would be. The former is an indicator, like a taste or a headache; the latter is a thought process, like deciding it would be counterproductive to eat another piece of candy.
I don't know what the LW orthodoxy says on this issue; all I know is that, in general, it's pretty common for people to equivocate between moral feelings and moral thoughts until they end up believing something totally crazy. Nobody seems...
But could you really have saved $100 by deciding to buy that same exact house, just without that extra square foot?