As an outsider I kind of get the impression that there is a bit of looking-under-the-streetlamp syndrome going on here where world-modelling is assumed to be the most/only important feature because that's what we can currently do well. I got the same impression seeing Jeff Hawkins speaking at a conference recently.
I'm pretty sure that we suck at prediction - compared to evaluation and tree-pruning. Prediction is where our machines need to improve the most.
Agreed. And search is not the same problem as prediction, you can have a big search problem even when evaluating/predicting any single point is straightforward.
search is not the same problem as prediction
It is when what you are predicting is the results of a search. Prediction covers searching.
It is interesting that his view of AI is apparently that of a prediction tool [...] rather than of a world optimizer.
If you can predict well enough, you can pass the Turing test - with a little training data.
If we're talking reference classes, I would cite the example that the first hominid species to develop human-level intelligence took over the world.
At an object level, if AI research goes secret at some point, it seems unlikely, though not impossible, that if team A develops human-level AGI, then team B will develop super-human-level AGI before team A does. If the research is fully public (which seems dubious but again isn't impossible), then these advantages would be less pronounced, and it might well be that many teams could be in close competition even after human-level AGI. Still, because human-level AGI can be scaled to run very quickly, it seems likely it could bootstrap itself to stay in the lead.
If we're talking reference classes, I would cite the example that the first hominid species to develop human-level intelligence took over the world.
Note that humans haven't "taken over the world" in many senses of the phrase. We are massively outnumbered and out-massed by our own symbionts - and by other creatures.
Machine intelligence probably won't be a "secret" technology for long - due to the economic pressure to embed it.
While it's true that things will go faster in the future, that applies about equally to all players - in a phenomenon commonly known as "internet time".
Yes, let's engage in reference class tennis instead of thinking about object level features.
Doesn't someone have to hit the ball back for it to be "tennis"? If anyone does so, we can then compare reference classes - and see who has the better set. Are you suggesting this sort of thing is not productive? On what grounds?
As has been pointed out numerous times on lesswrong, history is not a very good guide for dealing with AI, since it is likely to be a singular (if you'll excuse the pun) event in history. Perhaps the only other thing it can be compared with is life itself, and we currently have no information about how it arose (did the first self-replicating molecule lead to all life as we know it? Or were there many competing forms of life, one of which eventually won?)
As has been pointed out numerous times on lesswrong, history is not a very good guide for dealing with AI, since it is likely to be a singular (if you'll excuse the pun) event in history. Perhaps the only other thing it can be compared with is life itself [...]
What, a new thinking technology? You can't be serious.
In the opening sentence I used the (perhaps unwise) abbreviation "artificial general intelligence (AI)" because I meant AGI throughout the piece, but I wanted to be able to say just "AI" for convenience. Maybe I should have said "AGI" instead.
The first OS didn't take over the world. The first search engine didn't take over the world. The first government didn't take over the world. The first agent of some type taking over the world is dramatic - but there's no good reason to think that it will happen. History better supports models where pioneers typically get their lunch eaten by bigger fish coming up from behind them.
whoever builds the first AI can take over the world, which makes building AI the ultimate arms race.
As the Wikipedians often say, "citation needed". The first "AI" was built decades ago. It evidently failed to "take over the world". Possibly someday a machine will take over the world - but it may not be the first one built.
I didn't buy the alleged advantage of a noise-free environment. We've known since von Neumann's paper titled:
PROBABILISTIC LOGICS AND THE SYNTHESIS OF RELIABLE ORGANISMS FROM UNRELIABLE COMPONENTS
...that you can use unreliable computing components to perform reliable computation - with whatever level of precision and reliability that you like.
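The core idea can be illustrated with a toy sketch (this is simple majority voting over redundant copies, not von Neumann's full multiplexing construction, and the gate names and error probability here are my own illustrative choices):

```python
import random

def noisy_nand(a, b, p_err=0.05):
    """A NAND gate that flips its output with probability p_err."""
    out = not (a and b)
    if random.random() < p_err:
        out = not out
    return out

def redundant_nand(a, b, p_err=0.05, copies=15):
    """Majority vote over many independent noisy copies of the gate."""
    votes = sum(noisy_nand(a, b, p_err) for _ in range(copies))
    return votes > copies // 2

random.seed(0)
trials = 10_000
# NAND(1, 1) should be False; count how often each version gets it wrong.
single_err_rate = sum(noisy_nand(1, 1) is not False for _ in range(trials)) / trials
redundant_err_rate = sum(redundant_nand(1, 1) is not False for _ in range(trials)) / trials
print(single_err_rate)     # roughly 0.05
print(redundant_err_rate)  # orders of magnitude smaller
```

Adding more independent copies drives the error rate down geometrically, so you can hit any target reliability you like - which is the point of the paper.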
Plus the costs of attaining global synchrony and determinism are large and massively limit the performance of modern CPU cores. Parallel systems are the only way to attain large computing capacities - and you can't guarantee every component in a large parallel system will behave deterministically. So: most of the future is likely to lie with asynchronous systems and hardware indeterminism, rather contrary to Yudkowsky's claims.
Could you elaborate on the connections between image recognition / interpretation and prediction? For this reply, it's fine to be only roughly accurate. (In case an inability to be sufficiently rigorous is what prevented you from sketching the connection.)
...naively, I think of intelligence as, say, an ability to identify and solve problems. Is LeCun saying perhaps that this is equal to prediction, or not as important as prediction, or that he's more interested in working on the latter?
Here is one of my efforts to explain the links: Machine Forecasting.