Comment author: David_Allen 28 August 2012 08:32:05PM 1 point [-]

I can help you when you are in the Portland area. Just let me know what you need.

Comment author: Mitchell_Porter 17 August 2012 10:12:10AM 0 points [-]

People have noticed circular dependencies among subdisciplines of philosophy before. A really well-known one is the cycle connecting ontology and epistemology: your epistemology should imply your ontology, and your ontology must permit your epistemology. More arcane is the interplay between phenomenology, epistemology, and methodology.

Your approach to ontology seems to combine these two cycles, with the p/e/m cycle being more fundamental. All ontological claims are said to be dependent on a cognitive context, and this justifies ontological relativism.

That's not my philosophy; I see the possibility of reaching foundations, and also the possibility of countering the relativistic influence of the p/e/m perspective, simply by having a good ontological account of what the p/e/m cycle is about. From this perspective, the cycle isn't an endless merry-go-round, it's a process that you iterate in order to perfect your thinking. You chase down the implications of one ology for another, and you keep that up until you have something that is complete and consistent.

Or until you discover the phenomenological counterpart of Gödel's theorem. In what you write I don't see a proof that foundations don't exist or can't be reached. Perhaps they can't, but in the absence of a proof, I see no reason to abandon cognitive optimism.

Comment author: David_Allen 18 August 2012 11:45:25PM 0 points [-]

A really well-known one is the cycle connecting ontology and epistemology: your epistemology should imply your ontology, and your ontology must permit your epistemology. More arcane is the interplay between phenomenology, epistemology, and methodology.

I have read many of your comments and I am uncertain how to model your meanings for 'ontology', 'epistemology' and 'methodology', especially in relation to each other.

Do you have links to sources that describe these types of cycles, or are you willing to describe the cycles you are referring to--in the process establishing the relationship between these terms?

Your approach to ontology seems to combine these two cycles, with the p/e/m cycle being more fundamental. All ontological claims are said to be dependent on a cognitive context, and this justifies ontological relativism.

The term "cycles" doesn't really capture my sense of the situation. Perhaps the sense of recurrent hypergraphs is closer.

Also, I do not limit my argument only to things we describe as cognitive contexts. My argument allows for any type of context of evaluation. For example, an antenna interacting with a photon creates a context of evaluation that generates meaning in terms of the described system.

...and this justifies ontological relativism.

I think that this epistemology actually justifies something more like an ontological perspectivism, but it generalizes the context of evaluation beyond the human-centric concepts found in relativism and perspectivism. Essentially it stops privileging human consciousness as the only context of evaluation that can generate meaning. It is this core idea that separates my epistemology from most of the related work I have found in epistemology, philosophy, linguistics and semiotics.

In what you write I don't see a proof that foundations don't exist or can't be reached.

I'm glad you don't see those proofs because I can't claim either point from the implied perspective of your statement. Your statement assumes that there exists an objective perspective from which a foundation can be described. The problem with this concept is that we don't have access to any such objective perspective. We can only identify the perspective as "objective" from some perspective... which means that the identified "objective" perspective depends upon the perspective that generated the label, rendering the label subjective.

You do provide an algorithm for finding an objective description:

I see the possibility of reaching foundations, and also the possibility of countering the relativistic influence of the p/e/m perspective, simply by having a good ontological account of what the p/e/m cycle is about. From this perspective, the cycle isn't an endless merry-go-round, it's a process that you iterate in order to perfect your thinking. You chase down the implications of one ology for another, and you keep that up until you have something that is complete and consistent.

Again from this it seems that while you reject some current conclusions of science, you actually embrace scientific realism--that there is an external reality that can be completely and consistently described.

As long as you are dealing in terms of maps (descriptions) it isn't clear to me that you ever escape the language hierarchy, and therefore you are never free of Gödel's theorems. To achieve the level of completeness and consistency you strive for, it seems that you need to describe reality in terms equivalent to those it uses... which means you aren't describing it so much as generating it. If this description of a reality is complete then it is rendered in terms of itself, and only itself, which would make it a reality independent of ours, and so we would have no access to it (otherwise it would simply be a part of our reality and therefore not complete). Descriptions of reality that generate reality aren't directly accessible by the human mind; any translation of these descriptions to human-accessible terms would render the description subject to Gödel's theorems.

I see no reason to abandon cognitive optimism.

I don't want anybody to abandon the search for new and better perspectives on reality just because we don't have access to an objective perspective. But by realizing that there are no objective perspectives we can stop arguing about the "right" way of viewing all of reality and spend that time finding "good" or "useful" ways to view parts of it.

Comment author: Mitchell_Porter 10 August 2012 07:28:51AM -2 points [-]

My original formulation is that AI = state-machine materialism = computational epistemology = a closed circle. However, it's true that you could have an AI which axiomatically imputes a particular phenomenology to the physical states, and such an AI could even reason about the mental life associated with transhumanly complex physical states, all while having no mental life of its own. It might be able to tell us that a certain type of state machine is required in order to feel meta-meta-pain, meta-meta-pain being something that no human being has ever felt or imagined, but which can be defined combinatorically as a certain sort of higher-order intentionality.

However, an AI cannot go from just an ontology of physical causality, to an ontology which includes something like pain, employing only computational epistemology. It would have to be told that state X is "pain". And even then it doesn't really know that to be in state X is to feel pain. (I am assuming that the AI doesn't possess consciousness; if it does, then it may be capable of feeling pain itself, which I take to be a prerequisite for knowing what pain is.)

Comment author: David_Allen 15 August 2012 12:02:43AM *  0 points [-]

Continuing my argument.

It appears to me that you are looking for an ontology that provides a natural explanation for things like "qualia" and "consciousness" (perhaps by way of phenomenology). You would refer to this ontology as the "true ontology". You reject Platonism--"an ontology which reifies mathematical or computational abstractions"--because things like "qualia" are absent.

From my perspective, your search for the "true ontology"--which privileges the phenomenological perspective of "consciousness"--is indistinguishable from the scientific realism that you reject under the name "Platonism"--which (by some accounts) privileges a materialistic or mathematical perspective of everything.

For example, using a form of your argument I could reject both of these approaches to realism because they fail to directly account for the phenomenological existence of SpongeBob SquarePants, and his wacky antics.

Much of what you have written roughly matches my perspective, so to be clear I am objecting to the following concepts and many of the conclusions you have drawn from them:

  • "true ontology"
  • "true epistemology"
  • "Consciousness objectively exists"

I claim that variants of antirealism have more to offer than realism. References to "true" and "objective" have implied contexts from which they must be considered, and without those contexts they hold no meaning. There is nothing that we can claim to be universally true or objective that does not have this dependency (including this very claim (meta-recursively...)). Sometimes this concept is stated as "we have no direct access to reality".

So from what basis can we evaluate "reality" (whatever that is)? We clearly are evaluating reality from within our dynamic existence, some of which we refer to as consciousness. But consciousness can't be fundamental, because its identification appears to depend upon itself performing the identification; and a description of consciousness appears to be incomplete in that it does not actually generate the consciousness it describes.

Extending this concept a bit, when we go looking for the "reality" that underpins our consciousness, we have to model it in terms of our experience, which is dependent upon... well, it depends on our consciousness and its dynamic dependence on "reality". Also, these models don't appear to generate the phenomena they describe, and so it appears that circular reasoning and incompleteness are fundamental to our experience.

Because of this I suggest that we adopt an epistemology that is based on the meta-recursive dependence of descriptions on dynamic contexts. Using an existing dynamic context (such as our consciousness) we can explore reality in the terms that are accessible from within that context. We may not have complete objective access to that context, but we can explore it and form models to describe it, from inside of it.

We can also form new dynamic contexts that operate in terms of the existing context, and these newly formed inner contexts can interact with each other in terms of dynamic patterns of the terms of the existing context. From our perspective we can only interact with our child contexts in the terms of the existing context, but the inner contexts may be generating internal experiences that are very different from those existing outside of them, based on the interaction of the dynamic patterns we have defined for them.

Inverting this perspective, perhaps our consciousness is formed from the experiences generated from the dynamic patterns formed within an exterior context, and that context is itself generated from yet another set of interacting dynamic patterns... and so on. We could attempt to identify this nested set of relationships as its own ontology... only it may not actually be so well structured. It may actually be organized more like a network of partially overlapping contexts, where some parts interact strongly and other parts interact very weakly. In any case, our ability to describe this system will depend heavily on the dynamic perspective from which we observe the related phenomena; and our perspective is of course embedded within the system we are attempting to describe.

I am not attempting to confuse the issues by pointing out how complex this can be. I am attempting to show a few things:

  • There is no absolute basis, no universal truth, no center, no bottom layer... from our perspective which is embedded in the "stuff of reality". I make no claims about anything I don't have access to.
  • Any ontology or epistemology will inherently be incomplete and circularly self-dependent, from some perspective.
  • The generation of meaning and existence is dependent on dynamic contexts of evaluation. When considering meaning or existence it is best to consider them in the terms of the context that is generating them.
  • Some models/ontologies/epistemologies are better than others, but the label "better" is dependent on the context of evaluation and is not fundamental.
  • The joints that we are attempting to carve the universe at are dependent upon the context of evaluation, and are not fundamental.
  • Meaning and existence are dynamic, not static. A seemingly static model is being dynamically generated, and stops existing when that modeling stops.
  • Using a model of dynamic patterns, based in terms of dynamic patterns, we might be able to explain how consciousness emerges from non-conscious stuff; but this model will not be fundamental or complete, it will simply be one way to look at the Whole Sort of General Mish Mash of "reality".

To apply this to your "principle of non-vagueness": there is no reason to expect that mapping between pairs of arbitrary perspectives--between physical and phenomenological states in this case--is necessarily precise (or even meaningful). Because they are two different ways of describing arbitrary slices of "reality", they may refer to not-entirely-overlapping parts of "reality". Certainly physical and phenomenological states are modeled and measured in very different ways, so a great deal of uncertainty/vagueness caused by this non-overlap should be expected.

And this claim:

But as I have argued, not only must the true ontology be deeper than state-machine materialism, there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.

Current software is rarely programmed to directly model state machines. It may be possible to map the behavior of existing systems to state machines, but that is not generally the perspective held by the programmers, or by the dynamically running software. The same is true for current AI, so from that perspective your claim seems a bit odd to me. The claim that an AI can be mapped to a state machine is based on a particular perspective on the AI involved, and in fact that mapping does not discount that the AI is implemented within the same "reality" that we are. If our physical configuration (from some perspective) allows us to generate consciousness then there is no general barrier that should prevent AI systems from achieving a similar form of consciousness.

I recognize that these descriptions may not bridge our inference gap; in fact they may not even properly encode my intended meaning. I can see that you are searching for an epistemology that better encodes your understanding of the universe; I'm just tossing in my thoughts to see if we can generate some new perspectives.

Comment author: David_Allen 10 August 2012 05:28:39PM 0 points [-]

The contexts from which you identify "state-machine materialism" and "pain" appear to be very different from each other, so it is no surprise that you find no room for "pain" within your model of "state-machine materialism".

You appear to identify this issue directly in this comment:

My position is that a world described in terms of purely physical properties or purely computational properties does not contain qualia. Such a description itself would contain no reference to qualia.

Looking for the qualia of "pain" in a state-machine model of a computer is like trying to find out what my favorite color is by using a hammer to examine the contents of my head. You are simply using the wrong interface to the system.

If you examine the compressed and encrypted bit sequence stored on a DVD as a series of 0 and 1 characters, you will not be watching the movie.

If you don't understand Russian, you will not find the subtle plot twists of a novel written in Russian compelling.

If you choose some perspectives on Searle's Chinese room thought experiment you will not see the Chinese speaker; you will only see the mechanism that generates Chinese symbols.

So stuff like "qualia", "pain", "consciousness", and "electrons" only exist (hold meaning) from perspectives that are capable of identifying them. From other perspectives they are non-existent (have no meaning).

If you choose a perspective on "conscious experience" that requires a specific sort of physical entity to be present, then a computer without that entity will never qualify as "conscious", for you. Others may disagree, perhaps pointing out aspects of its responses to them, or how some aspects of the system are functionally equivalent to the physical entity you require. So, which is the right way to identify consciousness? To figure that out you need to create a perspective from which you can identify one as right and the other as wrong.

Comment author: Grognor 08 August 2012 02:19:42PM 9 points [-]

Parts of this I think are brilliant, other parts I think are absolute nonsense. Not sure how I want to vote on this.

there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.

This strikes me as probably true but unproven.

My own investigations suggest that the tradition of thought which made the most progress in this direction was the philosophical school known as transcendental phenomenology.

You are anthropomorphizing the universe.

Comment author: David_Allen 09 August 2012 05:25:18PM 0 points [-]

there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.

This strikes me as probably true but unproven

It seems possible for an AI to engage in a process of search within the ontological Hilbert space. It may not be efficient, but a random search should make all parts of any particular space accessible, and a random search across a Hilbert space of ontological spaces should make other types of ontological spaces accessible, and a random search across a Hilbert space containing Hilbert spaces of ontological spaces should... and on up the meta-chain. It isn't clear why such a system wouldn't have access to any ontology that is accessible by the human mind.
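The meta-chain of searches described above can be sketched as a toy program. This is only an illustration under drastic simplifying assumptions: a "space" is modeled as a finite set of integers, a "space of spaces" as a set of generators of such sets, and nothing here captures the actual structure of a Hilbert space; all names and parameters are invented.

```python
import random

random.seed(0)  # deterministic for illustration

# Level 0: a "space" is just a finite set of candidate points (here, ints).
# Level 1: a "space of spaces" is a set of generators that build level-0 spaces.
# Higher meta-levels would continue the same pattern.

def make_space(size):
    """Generate a level-0 space: a list of candidate points."""
    return [random.randint(0, 100) for _ in range(size)]

def random_search(space, is_good):
    """Plain random search within one space."""
    for _ in range(1000):
        candidate = random.choice(space)
        if is_good(candidate):
            return candidate
    return None

def meta_search(space_generators, is_good):
    """Random search across a space of spaces: pick a generator,
    build a space from it, then search within that space."""
    for _ in range(100):
        gen = random.choice(space_generators)
        result = random_search(gen(), is_good)
        if result is not None:
            return result
    return None

# A level-1 space: three different ways of generating level-0 spaces.
generators = [lambda: make_space(10), lambda: make_space(50), lambda: make_space(200)]

# Inefficient, but every reachable point of every generated space is accessible.
found = meta_search(generators, lambda x: x % 17 == 0)
```

The point of the sketch is only that each meta-level reuses the same search primitive over a richer kind of object, which is why the chain can in principle be extended upward indefinitely.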

Comment author: Vladimir_M 29 May 2012 07:09:08AM *  22 points [-]

[E]veryone who is renting a house is renting it from someone who bought it, who is presumably not losing money on the deal. (Or is that a false presumption? Do landlords typically spend more to purchase and maintain their property than they make in rental income? How could that possibly be true?)

You can also ask a different question. If you borrow money to buy a house, you must find a lender willing to lend you at some interest rate. The interest rate is nothing but the price of renting money. So if it costs less to borrow (i.e. rent) the money to buy a house than to just rent the house directly, then how can the lender possibly be willing to lend you the money instead of investing it into a house himself and earning a rent higher than your interest?

When I make this argument, people usually try to argue that somehow you profit from buying by building equity with time. But if the money rent, i.e. interest, is equal to the house rent, then to build equity, you must make payments to the lender above this basic rent/interest rate -- otherwise you'll just keep renting the same amount of money indefinitely. And if you rent the house instead of making these higher payments, you can save and invest this difference, with the same positive effect on your net worth (which will also have an effect equivalent to the reduction in payments as the principal gets lower). Of course, this isn't true if the interest is lower than the rent, but then we get to the above question of why anyone would be so irrational as to lend at such terms. It also isn't true if the house price grows faster than any alternative investment -- but even ignoring the lessons from recent history, this again gets us to the question why someone would ever lend you the money at this cheap interest rate instead of investing the money himself into these fast-appreciating houses.
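The equity-vs-savings argument above can be checked with a toy simulation. This is a deliberately spherical-cow model: flat house price, the renter's investment return equal to the mortgage rate, and annual rent equal to interest on the house's full value; all figures are illustrative.

```python
# Toy model of the rent-vs-buy equivalence argued above.
# Spherical-cow assumptions: flat house price, investment return equal to
# the mortgage rate, annual rent equal to interest on the house's value.

HOUSE_PRICE = 300_000.0
RATE = 0.05                  # both the mortgage rate and the investment return
ANNUAL_PAYMENT = 25_000.0    # buyer pays above the pure interest of 15,000
YEARS = 15                   # horizon chosen so the mortgage is not yet paid off

rent = RATE * HOUSE_PRICE    # renting the house costs the same as renting the money

debt = HOUSE_PRICE           # buyer starts fully mortgaged
savings = 0.0                # renter invests what the buyer overpays

for year in range(YEARS):
    debt = debt * (1 + RATE) - ANNUAL_PAYMENT                 # buyer builds equity
    savings = savings * (1 + RATE) + (ANNUAL_PAYMENT - rent)  # renter invests the difference

equity = HOUSE_PRICE - debt
print(f"buyer equity:   {equity:12.2f}")
print(f"renter savings: {savings:12.2f}")
```

Both quantities obey the same recurrence, E' = E(1+r) + (A − rP) starting from zero, so the two printouts match to the cent: under these assumptions building equity and investing the payment difference are the same thing.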

What these considerations show is that according to the textbook spherical-cow microeconomics, on a free market for housing, renting and buying should be equally good deals, since in efficient markets there is no possibility of arbitrage. And buying can be profitable over renting only if there is a strange opportunity for arbitrage where it's cheap to rent money but expensive to rent a house, even though money and houses are readily convertible into each other. A similar argument can of course be made against the possible advantage of renting -- except for the issues of risk-aversion and asset diversification, which decisively favor renting over owning.

In reality, of course, these simple spherical-cow models don't work, and there are lots of complicated and ill-understood factors involved, including all sorts of people's biases and signaling issues, high transaction costs, Knightian uncertainties, exuberant speculation, and not the least of all, huge government interference in the market by various subsidies, regulations, and other convoluted and dubious enterprises. The result is a complicated mess in which an accurate analysis of what's really going on is practically impossible, and in which there may indeed be possibilities for arbitrage.

However, regardless of all that, it seems to me that buying has some tremendous drawbacks, for which I can't see comparable upsides under any realistic circumstances. The first and foremost is that you're investing the bulk of your net worth (and on top of that a huge pile of borrowed money) into a single non-diversified asset, which seems like a crazy idea by the most basic principles of sound personal finance. [1] For various other drawbacks, one could perhaps argue that they are offset by the downsides of renting (though I would disagree), but this one really seems to me by itself like a decisive argument against getting into house ownership.


[1] Note that this is one possible solution to your landlord puzzle. The tenant may want to pay a premium to avoid placing most of his net worth into this asset because of risk-aversion, while for the (rich or corporate) landlord, it's just another item in a large portfolio with the risk well spread.

Comment author: David_Allen 29 May 2012 09:42:02PM 5 points [-]

However, regardless of all that, it seems to me that buying has some tremendous drawbacks, for which I can't see comparable upsides under any realistic circumstances.

Before I bought my house I ran the numbers and came to the same conclusion, that home ownership would not maximize my net worth and would increase certain types of risk. As a result I see home ownership as a luxury, not as an investment. I bought my house because I wanted it as a luxury and believed I could manage the risk.

Comment author: listic 27 May 2012 01:08:27PM *  1 point [-]

I would like to ask the commentators: what do you think about learning JavaScript as a "first" programming language? I would like to learn to use modern programming technologies and utilize best practices, but learn something quickly usable in the real world and applicable to web programming.

I was going to learn JavaScript for a while (but haven't got around to it) because:

  • I heard it's kinda Scheme on the inside, and generally has some really good parts
  • To do web programming, I need to learn JavaScript for client side anyway; with Node.JS I can utilize (and practice) the same language for server-side programming.
  • Node.JS seems to be a great framework for web programming, built with asynchronous/evented paradigm that should be good for doing... whatever stuff they are doing on the web?
  • Looks like Node.JS is slowly climbing to mainstream acceptance. I mean, I think I could really get a job with that outside of Silicon Valley and Japan!

But I have heard so much advice to learn Python lately that I am thinking: am I missing something and being difficult?

It looks like lsparrish has been around and tried learning different languages before, and so have I: I was paid to program in C and Forth. But I am a real beginner actually.

Comment author: David_Allen 27 May 2012 05:20:16PM 2 points [-]

JavaScript is fine as a first language. I consider it to be a better first language than the TRS-80 BASIC I started on.

Comment author: David_Allen 26 May 2012 04:22:07AM *  8 points [-]

Is it better to focus on one path, avoiding contamination from others?

Learning multiple programming languages will broaden your perspective and will make you a better and more flexible programmer over time.

Is it better to explore several simultaneously, to make sure you don't miss the best parts?

If you are new and learning on your own, you should focus on one language at a time. Pick a project to work on and then pick the language you are going to use. I like to code a Mandelbrot set image generator in each language I learn.
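A minimal version of that exercise in Python, small enough to port to each new language as a learning project (the render bounds, resolution, and character ramp are arbitrary choices):

```python
# Minimal ASCII Mandelbrot renderer: a small, portable exercise for
# learning a new language. Bounds, resolution, and the character ramp
# are arbitrary choices.

WIDTH, HEIGHT, MAX_ITER = 60, 24, 40

def escape_time(c: complex) -> int:
    """Iterations before z -> z*z + c escapes |z| > 2, capped at MAX_ITER."""
    z = 0j
    for i in range(MAX_ITER):
        z = z * z + c
        if abs(z) > 2:
            return i
    return MAX_ITER

for row in range(HEIGHT):
    y = 1.2 - 2.4 * row / (HEIGHT - 1)
    line = ""
    for col in range(WIDTH):
        x = -2.0 + 2.8 * col / (WIDTH - 1)
        # Map escape time onto a 9-character brightness ramp.
        line += " .:-=+*#@"[min(escape_time(complex(x, y)) * 9 // MAX_ITER, 8)]
    print(line)
```

The exercise touches arithmetic, loops, functions, and output in one small program, which is what makes it a useful first project in an unfamiliar language.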

Which one results in converting time to dollars the most quickly?

If you make your dollars only from the finished product, then pick the language with the highest productivity for your target platform and problem domain. This will probably be a garbage-collected language with clean syntax, a good integrated development environment, and a large set of available libraries.

Right now this will probably be Python, Java or C#.

If you make your dollars by producing lines of code for a company, then you will want to learn a language that is heavily used. There is generally a large demand for C++, C#, Java, Python, and PHP programmers. Companies in certain domains will focus on other languages like Lisp, Smalltalk and Ada.

Which one most reliably converts you to a higher value programmer over a longer period of time?

No single language will do this in the long run, but you might take temporary advantage of the current rise of Python, or the large install base of Java and C++.

For a broad basic education I suggest:

  • Learn a functional language. Haskell is my first choice; Lisp is my second choice.
  • Learn an object oriented language. Smalltalk has the best OO representation I have come across.
  • Learn a high level imperative language. Based on growth, Python appears to currently be the best choice; Java would be my second choice.
  • Learn an assembly language. Your platform of choice.

If you want to do web-related development:

  • HTML, CSS, Javascript.
  • SQL and relational DB.
  • XML, XSD, and XSLT.
  • C#.NET, Java, Python or PHP.

If you want to do engineering related development:

  • C and C++.
  • Perl.
  • SQL.
  • Mathematica or Matlab.
  • For some domains, LabVIEW.

Comment author: asr 16 April 2012 09:33:08PM *  0 points [-]

My claim is that this use of a base general purpose language is not necessary, and possibly not generally desirable. With an ecosystem of DSLs general purpose languages can be generated when needed, and DSLs can be generated using only other DSLs.

This seems like a bad idea. There is a high cognitive cost to learning a language. There is a high engineering cost to making different languages play nice together -- you need to figure out precisely what happens to types, synchronization, etc etc at the boundaries.

I suspect that breaking programs into pieces that are defined in terms of separate languages is lousy engineering. Among other things, traditional unix shell programming has very much this flavor -- a little awk, a little sed, a little perl, all glued together with some shell. And the outcome is usually pretty gross.

Comment author: David_Allen 17 April 2012 06:04:01AM 1 point [-]

These are well targeted critiques, and are points that must be addressed in my proposal. I will address these critiques here while not claiming that the approach I propose is immune to "bad design".

There is a high cognitive cost to learning a language.

Yes, traditional general purpose languages (GPLs) and many domain specific languages (DSLs) are hard to learn. There are a few reasons I believe this cost can be allayed by the approach I propose. The DSLs I propose are (generally) small, composable, heavily reused, and interface oriented, which is probably very different from the GPLs (and perhaps DSLs) in your experience. Also, I will describe what I call the encoding problem and map it between DSLs and GPLs to show why well-chosen DSLs should be better.

In my model there will be heavy reuse of small (or even tiny) DSLs. The DSLs can be small because they can be composed to create new DSLs (via transparent implementations, heavy use of generics, transformation, and partial specialization). Composition allows each DSL to deal with a distinct and simple concern and yet be combined. Reuse is enhanced because many problem domains, regardless of their abstraction level, can be effectively modeled using common concerns. For example, consider functions, Boolean logic, control structures, trees, lists, and sets. Cross-cutting concerns can be handled using the approaches of aspect-oriented programming.
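The composition idea can be illustrated with a toy sketch in Python: two tiny embedded "DSLs" (one for predicates, one for transformations) composed into a third pipeline DSL. Everything here is invented for illustration, not a reference to any existing tool:

```python
# Toy sketch of small, composable embedded DSLs. Each DSL handles one
# concern; composition builds larger vocabularies out of the pieces.
# All names here are invented for illustration.

# DSL 1: predicates, closed under boolean composition.
def both(p, q):   return lambda x: p(x) and q(x)
def either(p, q): return lambda x: p(x) or q(x)
def negate(p):    return lambda x: not p(x)

# DSL 2: transformations, closed under sequencing.
def then(f, g):   return lambda x: g(f(x))

# DSL 3: a pipeline DSL composed from the first two.
def keep(pred):
    return lambda items: [x for x in items if pred(x)]

def apply_each(fn):
    return lambda items: [fn(x) for x in items]

def pipeline(*stages):
    out = stages[0]
    for s in stages[1:]:
        out = then(out, s)
    return out

# Usage: small vocabularies, combined.
is_even = lambda n: n % 2 == 0
small   = lambda n: n < 10
doubled = lambda n: n * 2

process = pipeline(keep(both(is_even, small)), apply_each(doubled))
print(process(range(20)))   # even numbers below 10, doubled
```

Each of the three mini-languages is trivial to learn on its own; the leverage comes from the fact that their pieces combine freely, which is the property the paragraph above attributes to small, composable DSLs.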

The small size of these commonly used DSLs, and their focused concerns make them individually easy to learn. The heavy reuse provides good leveraging of knowledge across projects and across scales and types of abstractions. Probably learning how to program with a large number of these DSLs will be the equivalent of learning a new GPL.

In my model DSLs are best thought of as interfaces, where the interface is customized to provide an efficient and easily understood method of manipulating solutions within the problem domain. In some cases this might be text-based interfaces such as we commonly program in now, but it could also be graphs, interactive graphics, sound, touch, or EM signals; really any form of communication. The method and structure of communication is constrained by the interface, and is chosen to provide a useful (and low-noise) perspective into the problem domain. Text-based languages often come with a large amount of syntactic noise. (Ever try template-based metaprogramming in C++? Ack!)

Different interfaces (DSLs) may provide different perspectives into the same solution space of a problem domain. For example, a graph and the data being graphed: the underlying data could be modified by interacting with either interface. The choice of interface will depend on the programmer's intention. This is also related to the concept of projectional editors, and can be enhanced with concepts like Example Centric Programming.

The encoding problem is the problem of transforming an abstract model (the solution) into code that represents it properly. If the solution is coded in a high-level DSL, then the description of the model that we create while thinking about the problem and talking to our customers might actually represent the final top-level code. In this case the cognitive cost of learning the DSL is the same as understanding the problem domain, and the cost of understanding the program is that of understanding the solution model. For well-chosen DSLs the encoding problem will be easy to solve.

In the case of general purpose languages the encoding problem can add arbitrary levels of complexity. In addition to understanding the problem domain and the abstract solution model, we also have to know how these are encoded into the general purpose language. This adds a great deal of learning effort even if we already know the language, and even if we find a library that allows us to code the solution relatively directly. Perhaps worse than the learning cost is the ongoing mental effort of encoding and decoding between the abstract models and the general purpose implementation. We have to be able to understand and modify the solution through an additional layer of syntactic noise. The extra complexity, the larger code size and the added cognitive load imposed by using general purpose languages multiplies the likelihood of bugs.
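The encoding overhead shows up even inside one language, when the same rule is stated once in near-domain form and once hand-encoded imperatively. A contrived sketch (the pricing rule and all names are invented):

```python
# The same rule, encoded twice. The rule: orders of 100 or more from
# repeat customers get 10% off. Rule and data are invented.

orders = [
    {"total": 150.0, "repeat": True},
    {"total": 80.0,  "repeat": True},
    {"total": 200.0, "repeat": False},
]

# Near-domain encoding: reads almost like the rule as spoken.
def discounted(order):
    qualifies = order["total"] >= 100 and order["repeat"]
    return order["total"] * (0.9 if qualifies else 1.0)

prices = [discounted(o) for o in orders]

# Hand encoding: the same rule buried in index bookkeeping. The reader
# must decode the loop machinery to recover the rule.
prices2 = []
i = 0
while i < len(orders):
    t = orders[i]["total"]
    if t >= 100:
        if orders[i]["repeat"]:
            t = t * 0.9  # same arithmetic as above, so results compare exactly
    prices2.append(t)
    i += 1
```

The two encodings compute identical results; the difference is purely the layer of syntactic noise the reader must see through, which is the cost the paragraph above attributes to GPL encodings.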

There is a high engineering cost to making different languages play nice together -- you need to figure out precisely what happens to types, synchronization, etc etc at the boundaries.

Boundary costs can be common and high even if you are lucky enough to get to program exclusively in a single general purpose language. Ever try to use functions from two different libraries on the same data? Image processing libraries and math libraries are notorious for custom memory representations, none of which seem to match my preferred representation of the same data. Two GUI libraries or stream I/O libraries will clobber each other's output. The costs (both development-time and run-time) of conforming disparate interfaces in general purpose languages are outrageous. My proposal just moves these boundary costs to new (and perhaps unexpected) places while providing tools (DSLs for composition and transformation) that ease the effort of connecting the disparate interfaces.
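As a toy illustration of such a boundary (with two invented "libraries" that disagree about how to lay out the same image), here is the kind of adapter code that exists only to move data across the seam:

```python
# Hypothetical boundary cost: two invented libraries representing the same
# 2-pixel RGB image differently, and the adapters needed between them.

# "Library A": interleaved pixels, [(r, g, b), ...]
image_a = [(255, 0, 0), (0, 0, 255)]

# "Library B": planar channels, {"r": [...], "g": [...], "b": [...]}
def a_to_b(pixels):
    r, g, b = zip(*pixels)
    return {"r": list(r), "g": list(g), "b": list(b)}

def b_to_a(planes):
    return list(zip(planes["r"], planes["g"], planes["b"]))

image_b = a_to_b(image_a)
assert b_to_a(image_b) == image_a  # round-trip across the boundary
```

None of this code advances the actual problem being solved; it is pure boundary overhead, and in real libraries the conversions also carry a run-time copying cost.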

I suspect that breaking programs into pieces that are defined in terms of separate languages is lousy engineering.

I've described my proposal as a perspective shift, and suggested that "interface" might be a better term than "language". To shift your perspective, consider the interfaces you have to your file system. You may have a command line interface to it, a GUI interface, and a programmatic interface (in your favorite language). You choose the appropriate interface based on the task at hand. The same is true for the interfaces I propose. You could use the file system in a complex way to perform perfectly good source code control, or you could rely on the simpler interface of a source control system. The source control system itself might simply rely on a complex structuring of the file system, but you don't really care how it works as long as it is easy to use and meets your needs. You could use CSV text files to store your data, but if you need to perform complex queries a database engine is probably a better choice.

We already break programs (stuff we do) into pieces that are defined in terms of separate languages (interfaces), and we consider this good engineering. My proposal is about how to successfully extend this type of separation of concerns to its granular and interconnected end-point.

Among other things, traditional unix shell programming has very much this flavor -- a little awk, a little sed, a little perl, all glued together with some shell. And the outcome is usually pretty gross.

Your UNIX shell programming example is well placed. It is roughly a model that matches my proposal with connected DSLs, but it is not a panacea (perhaps far from it). I will point out that the languages you mention (awk, sed, and perl) are all general purpose (Turing-complete) text-based languages, which is far from the type of DSL I am proposing. Also, the shell limits interaction between DSLs to character streams via pipes. This representation of communication rarely maps cleanly to the problem being solved, forcing the implementations to compensate. This generates a great deal of overhead in terms of cognitive effort, complexity, cost ($, development time, run-time), and in some sense a reduction of beauty in the Universe.
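A small sketch (invented data, in Python) of the cost the character-stream boundary imposes: each stage must flatten its records to text and the next stage must re-parse them, whereas a structured boundary would pass the records through intact:

```python
# Hypothetical contrast: pipe-style character streams vs structured data.

rows = [("alice", 42), ("bob", 7)]

# Character-stream style: serialize to flat text, then re-parse downstream
# with fragile field-splitting (breaks on spaces in names, type info is lost
# and must be re-imposed by hand with int()).
stream = "\n".join(f"{name} {score}" for name, score in rows)
parsed = [
    (fields[0], int(fields[1]))
    for fields in (line.split() for line in stream.splitlines())
]

# Structured style: the downstream stage receives the records directly.
structured = list(rows)

assert parsed == structured
```

The two results happen to agree here, but only because the data was simple enough to survive the round trip through flat text; the parsing and type-rebuilding code is exactly the compensating overhead described above.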

To highlight the difference between shell programming and the system I'm proposing, start with the shell programming model, but in addition to character streams add support for the communication of structured data, and in addition to pipes add new communication models like a directed graph communication model. Add DSLs that perform transformations on structured data, and DSLs for interactive interfaces. Now you can create sophisticated applications such as syntax sensitive editors while programming at a level that feels like scripting or perhaps like painting; and given the composability of my DSLs, the parts of this program could be optimized and specialized (to the hardware) together to run like a single, purpose-built program.
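As a rough sketch of that extended model (stage names, the graph encoding, and the runner are all invented for illustration), here are small transformation stages exchanging structured records over an explicit directed graph rather than a single pipe:

```python
# Hypothetical sketch: DSL-like stages communicating structured data over a
# directed graph. The graph here is linear, but the walker handles any
# node -> successors mapping.

def parse(text):
    return [{"word": w} for w in text.split()]

def lengths(records):
    return [dict(r, length=len(r["word"])) for r in records]

def longest(records):
    return max(records, key=lambda r: r["length"])["word"]

# Directed graph: each node names its downstream stages.
GRAPH = {"parse": ["lengths"], "lengths": ["longest"], "longest": []}
STAGES = {"parse": parse, "lengths": lengths, "longest": longest}

def run(graph, stages, start, data):
    # Run this stage, then feed its structured output to each successor.
    result = stages[start](data)
    for nxt in graph[start]:
        result = run(graph, stages, nxt, result)
    return result

assert run(GRAPH, STAGES, "parse", "to be or not") == "not"
```

Note that the stages pass dicts, not flattened text, so no stage re-parses another's output; in the proposed system, such a graph of stages could then be specialized and optimized together as one program.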

Comment author: loup-vaillant 16 April 2012 01:24:14PM 1 point [-]

(Duplicate of this)

I believe that we should be programming using an ecosystem of domain specific languages.

If you haven't heard of the STEPS project from the Viewpoints Research Institute already, it may interest you. (Their last report is here)

Comment author: David_Allen 16 April 2012 07:32:22PM *  0 points [-]

Thank you for the reference to STEPS; I am now evaluating this material in some detail.

I would like to discuss the differences and similarities I see between their work and my perspective; are you familiar enough with STEPS to discuss it from their point of view?

In reply to this:

Or by making a really convenient DSL factory. The only use for your "general purpose" language would be to write DSLs.

This use of a general purpose language also shows up in the current generation of language workbenches (and here). For example JetBrains' Meta Programming System uses a Java-like base language, and Intentional Software uses a C# (like?) base language.

My claim is that this use of a base general purpose language is not necessary, and possibly not generally desirable. With an ecosystem of DSLs, general purpose languages can be generated when needed, and DSLs can be generated using only other DSLs.
