
skeptical_lurker comments on [LINK] Author's Note 119: Shameless Begging - Less Wrong Discussion

7 Post author: Evan_Gaensbauer 11 March 2015 12:14AM




Comment author: skeptical_lurker 13 March 2015 11:18:57AM *  1 point [-]

There were specs for a programming language that would, by its design, 'do what I mean' (specs that make my programmer friends laugh), complicated AI architectures, and ideas for the social engineering they would do with the gigadollars that would be rolling in, all to bring about the Singularity by 2010 so as to avoid the apocalyptic nanowar that was coming.

Flare (the language) didn't sound that dumb to me. My impression wasn't that it would inherently 'do what I mean', but that it would somehow be both machine- and human-readable, so that it would be easy to run advanced optimising compilers over it, and later it would provide a natural basis for an AI that could rewrite its own source code.

Looking back on it, this is far too much of a free lunch, and since an AI capable of understanding AI theory would probably also be able to parse the meaning of code written in conventional languages, it's rather redundant. I still expect that 'do what I mean' languages will appear; for instance, the language could detect 'obvious' mistakes, correct them, and inform the user.

e.g. "x * y = z does not work because the dimensions do not match. Nor does x' * y = z, but x * y' = z does, so I have taken the liberty of changing your code to x * y' = z"

or "'inutaliseation' is not a function or variable. I assume you meant 'initialization', which is a function, and I corrected this mistake"
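Both kinds of correction above can be sketched in ordinary Python today. This is a minimal illustration, not any real language feature: the function names (`fix_matmul`, `resolve_name`) and the cutoff value are my own assumptions, and a real compiler would hook this into its type checker and name resolver rather than run it at call time.

```python
# Hypothetical sketch of a 'do what I mean' layer: on a dimension mismatch,
# try transposing an operand; on an unknown identifier, suggest the closest
# known name. Names and thresholds here are illustrative, not a real spec.
import difflib
import numpy as np

def fix_matmul(x, y):
    """Try x * y; if the dimensions do not match, try x' * y, then x * y'."""
    candidates = [(x, y, "x * y"), (x.T, y, "x' * y"), (x, y.T, "x * y'")]
    for a, b, form in candidates:
        if a.shape[1] == b.shape[0]:
            if form != "x * y":
                print(f"dimensions do not match; using {form} instead")
            return a @ b
    raise ValueError("no transpose of either operand makes the dimensions match")

def resolve_name(name, known_names):
    """If `name` is unknown, fall back to the closest known identifier."""
    if name in known_names:
        return name
    matches = difflib.get_close_matches(name, known_names, n=1, cutoff=0.5)
    if matches:
        print(f"'{name}' is not defined; assuming you meant '{matches[0]}'")
        return matches[0]
    raise NameError(name)
```

With a 2x3 matrix `x` and a 2x4 matrix `y`, `fix_matmul(x, y)` notices that `x * y` fails but `x' * y` works and returns the 3x4 product, and `resolve_name("inutaliseation", ...)` recovers `initialization`, mirroring the two hypothetical compiler messages above.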

Eventually, it might evolve into a natural-language-to-code translator.

But yes, a nanowar by 2010 wasn't the smartest idea.