hairyfigment comments on [LINK] Why I'm not on the Rationalist Masterlist - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I'm a programmer, and I doubt that AI is possible. Or, rather, I doubt that artificial intelligence will ever look that way to its creators. More broadly, I'm skeptical of 'intelligence' in general. It doesn't seem like a useful term.
I mean, there's a device down at the freeway that moves an arm up if you pay the toll. So, as a system, it's got the ability to sense its environment (limited to knowing whether the coin-verification system is satisfied with the payment) and to affect that environment (raise and lower the arm). Most folks would agree that that is not AI.
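The toll gate described above can be sketched as the simplest possible sense-act system. This is a hypothetical illustration, not the actual device's code; the function name and strings are mine.

```python
# Hypothetical sketch of the toll gate as a minimal sense-act system:
# it "senses" whether payment was accepted and "acts" by moving the arm.
def toll_gate(payment_accepted: bool) -> str:
    """Map the single sensed input to the single available action."""
    return "raise arm" if payment_accepted else "keep arm down"
```

The entire behavior is one fixed mapping from input to output, which is the point of the example.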
So, then, how can we get beyond that? It is a nonhuman reaction to the environment. Whatever I wrote that we called "AI" would presumably do what I programmed it to (and naught else) in response to its sensory input. A futuristic war drone's basket is its radar and its lever is its missiles, but there's nothing new going on there. A chat bot's basket is the incoming feed and its lever is the outgoing text, but it doesn't 'choose' what it sends out in any sense more meaningful than the toll bot's decision matrix.
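The "decision matrix" framing can be made literal with a toy sketch: a chat bot that is structurally identical to the toll gate, just with a bigger lookup table. The table entries and names here are invented for illustration.

```python
# Hypothetical sketch: the chat bot's "choice" as the same kind of fixed
# mapping as the toll gate -- sensed input in, programmed output out.
DECISION_MATRIX = {
    "hello": "Hi there!",
    "bye": "Goodbye!",
}

def chat_bot(incoming: str) -> str:
    # No choosing in any deeper sense: just the table its author wrote.
    return DECISION_MATRIX.get(incoming, "I don't understand.")
```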
So maybe it could rewrite its own code. But if it does so, it'll only do so in the way that I've programmed it to. The paper-clip maximizer will never decide to rewrite itself as a gold-coin maximizer. The final result is just a derived product of my original code and the sensory experiences it's received. Is that any more 'intelligent' than the toll taker?
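A minimal sketch of this self-rewriting argument, with invented names and a made-up rewrite rule: the agent can modify itself, but only along the one axis the original code permits, so its final state is still a deterministic function of the initial code plus its inputs.

```python
# Hypothetical sketch: a "self-rewriting" agent whose rewrite rule is itself
# fixed by the original program.
class Maximizer:
    def __init__(self):
        self.target = "paper clips"  # original goal, never up for revision
        self.threshold = 10          # programmer-chosen rewrite trigger

    def observe(self, clips_seen: int):
        # The only "rewrite" it can perform is the one written into it:
        # adjusting its own threshold. It cannot decide to pursue gold coins.
        if clips_seen > self.threshold:
            self.threshold = clips_seen  # self-modification, but predetermined

    def act(self) -> str:
        return f"maximize {self.target}"
```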
I like to bet folks that AI won't happen within timeframe X. The problem then becomes defining what counts as AI happening. I wouldn't want them to point to the toll robot, and presumably they'd be equally miffed if we were slaves of the MechaPope and I were pointing out that its Twenty Commandments could be predicted given a knowledge of its source code.
Thinking on it, my knee-jerk criterion is that I will admit that AI exists if the United States knowingly gives it the right to vote (obviously there's a window where AI is sentient but can't vote, but given the speed of the FOOM that window will probably pass quickly), or if the earth declares war (or the equivalent) on it. It's a pretty hard criterion to come up with.
What would yours be? Say we bet, you and I, on whether AI will happen in 50 years. What would you want me to accept as evidence that it had (keeping in mind that we are imagining you as motivated not by a desire to win the bet but by a desire that the bet represent the truth)?
I would pick either some kind of programming ability, or the ability to learn a language like English (which, I would bet, implies the former if we're talking about what the design can do with some tweaks).