MrMind comments on Open thread, Jul. 04 - Jul. 10, 2016 - Less Wrong Discussion
AI which predicts the future based on a non-existent past. Imagine an algorithm that can predict the future in some domain (social, industrial, etc.), and assume we have the data necessary for this. From that data the algorithm builds several probability models, each projecting one year ahead. Say it creates 20 models, of which 10 have a probability of at least 85%. Taking those 10 candidate futures as inputs, the algorithm then simulates probabilities for the second year, then the third, and so on. The philosophy of the system is that the algorithm lives in the present but accesses "data from the past" when simulating the future, where that past is itself generated by analyzing data that has not yet come to pass. Given effectively unlimited computing power, it could predict the future thousands of years in advance and "experience" events that never happened.
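The scheme described above can be sketched in a few lines. This is a minimal toy model of my own, not an implementation of any real system: the thresholds, branching counts, and state transitions are all hypothetical stand-ins.

```python
# Toy sketch of chained forecasting: each year, generate candidate models
# from the current state and keep only the "confident" ones, then use each
# survivor as the starting state for next year's models.
# All numbers here (20 models, 0.85 threshold) are hypothetical, taken
# from the example in the comment above.
import random

random.seed(0)

def forecast_one_year(state, n_models=20, threshold=0.85):
    """Generate candidate next-year states and keep high-probability ones."""
    candidates = [(state + random.gauss(0, 1), random.random())
                  for _ in range(n_models)]
    return [s for s, p in candidates if p >= threshold]

def chain_forecasts(initial_state, years):
    """Chain yearly forecasts: every surviving branch seeds the next year."""
    branches = [initial_state]
    for _ in range(years):
        next_branches = []
        for state in branches:
            next_branches.extend(forecast_one_year(state))
        branches = next_branches
        if not branches:  # every branch fell below the threshold
            break
    return branches

print(len(chain_forecasts(0.0, 3)))  # number of surviving 3-year branches
```

Note that the branch count grows (or collapses) multiplicatively with each chained year, which is exactly the failure mode the replies below point at.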
So, is there any research on this subject?
What does that phrase mean?
That's called chaining the forecasts. This tends to break down after very few iterations because errors snowball and because tail events do happen.
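The snowballing can be made concrete with a one-liner: if each yearly forecast is right with probability p (using the 85% figure from the comment above as a hypothetical), the chance that an n-year chain is right at every link is p^n.

```python
# Toy illustration of why chained forecasts break down after few iterations:
# per-step accuracy compounds multiplicatively along the chain.
per_step_accuracy = 0.85  # hypothetical one-year accuracy from the example

def chance_all_correct(years, p=per_step_accuracy):
    """Probability that every link in a chain of yearly forecasts holds."""
    return p ** years

for years in (1, 5, 10, 20):
    print(years, round(chance_all_correct(years), 4))
# 1 0.85
# 5 0.4437
# 10 0.1969
# 20 0.0388
```

By year 20 the chain is almost certainly broken somewhere, and this simple picture doesn't even account for the tail events mentioned above.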
Yes, that's true. It is very difficult to build such forecasts in a non-automated way. It seems you would need to get the framing of the algorithm right and grow it gradually, for example as a tree or matrix of inventions, then add probabilities and timelines, and keep increasing the amount of data. But what do we do when the forecast crumbles further out? Even with enough data, in theory you could run into a large number of bifurcation points that simply create too many parallel universes with different futures. But perhaps it doesn't matter whether any single forecast is correct. What matters is having an enormous number of them, from which you can select the right future with special algorithms, or at least be prepared for multiple outcomes.
The right algorithm doesn't give you good results if the data you have isn't good enough.
What do you mean?
The amount of entropy corresponding to real-world information is, at best, the same in the predictions as in the starting data, and most likely the predictions contain less information.
Another possibility is that after n years the algorithm smooths out the probabilities of all the possible futures so that they become equally likely...
The problem is not only computational: unless there are some strong pruning heuristics, the value of predicting the far future decays rapidly, since the probability mass (which is conserved) becomes diluted between more and more branches.
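The dilution point can be seen directly: if total probability mass is conserved and each year every branch splits into k new branches, then after n years each branch carries only k^-n of the mass. The branching factor of 10 below is a hypothetical illustration, not a claim about any particular model.

```python
# Sketch of probability-mass dilution under branching futures:
# mass is conserved, so splitting into k branches per year leaves
# each individual branch with k**-n of the original mass after n years.
branching_factor = 10  # hypothetical branches per year

def mass_per_branch(years, k=branching_factor):
    """Probability mass left on a single branch after `years` of splitting."""
    return k ** -years

for years in (1, 2, 5):
    print(years, mass_per_branch(years))
# 1 0.1
# 2 0.01
# 5 1e-05
```

This is why strong pruning heuristics matter: without them, no single far-future branch retains enough mass to be worth acting on.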