
MrMind comments on Open thread, Jul. 04 - Jul. 10, 2016 - Less Wrong Discussion

4 Post author: MrMind 04 July 2016 07:02AM




Comment author: dimensionx 06 July 2016 12:16:53PM 0 points [-]

An AI which predicts the future based on a non-existent past. Imagine an algorithm that can predict the future in some domain (social, industrial, etc.), and assume we have the data it needs. The algorithm builds several probability models from these data, each projecting the future one year ahead. Say it creates 20 models, of which 10 have a true probability of 85%. What if, on the basis of those 10 future variants, the algorithm then simulates new probabilities for the 2nd year, then for the following 3 years, and so on? The philosophy of the system is that the algorithm lives in the present but accesses data from the past to simulate a future that is itself built from analysis of data that has not yet happened. If it had something like infinite computing power, it could predict the future thousands of years in advance and "experience" events that do not yet exist.
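The scheme being described can be sketched roughly as follows; `simulate_year` is a hypothetical stand-in for the actual predictive model, and the candidate count and confidence threshold are taken from the comment:

```python
import random

random.seed(1)

def simulate_year(scenario):
    # Hypothetical stand-in for the predictive model: extend the
    # scenario by one simulated year and report the model's confidence.
    return scenario + [random.random()], random.random()

def chain(years, candidates=20, threshold=0.85):
    # Each year, build `candidates` models per surviving scenario and
    # keep only those whose stated confidence exceeds `threshold`.
    scenarios = [[]]
    for _ in range(years):
        kept = []
        for s in scenarios:
            for _ in range(candidates):
                new_s, confidence = simulate_year(s)
                if confidence >= threshold:
                    kept.append(new_s)
        if not kept:
            return []
        scenarios = kept
    return scenarios

print(len(chain(2)))  # number of surviving 2-year scenarios
```

Note that the number of surviving scenarios grows multiplicatively with the horizon, which already hints at the branching problem raised further down the thread.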

So, is there any research on this subject?

Comment author: ChristianKl 07 July 2016 09:26:42AM 0 points [-]

true probability

What does that phrase mean?

Comment author: Lumifer 06 July 2016 02:38:39PM 0 points [-]

on the basis of those 10 future variants, the algorithm then simulates new probabilities for the 2nd year, then for the following 3 years, and so on

That's called chaining the forecasts. This tends to break down after very few iterations because errors snowball and because tail events do happen.
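A minimal Monte-Carlo sketch of why chained forecasts degrade; the 0.05 per-year noise level is an arbitrary assumption, and each yearly model is assumed to add independent error on top of the previous forecast:

```python
import random

random.seed(0)

def avg_abs_error(steps, trials=2000, step_sigma=0.05):
    # Average absolute log-error of a forecast chained for `steps`
    # years, where each yearly model contributes independent noise.
    total = 0.0
    for _ in range(trials):
        log_err = 0.0
        for _ in range(steps):
            log_err += random.gauss(0, step_sigma)  # errors accumulate
        total += abs(log_err)
    return total / trials

print(avg_abs_error(1), avg_abs_error(10))  # the 10-step chain is far worse
```

Since the per-step errors add, the typical error grows like the square root of the number of chained steps even in this benign model, and real tail events make it worse.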

Comment author: dimensionx 06 July 2016 04:20:39PM *  0 points [-]

Yes, that's true. It is very difficult to build such a forecast in a non-automated way. Apparently we need to take care of the right framework for the algorithm and grow it gradually, for example as a tree or matrix of inventions, then add probabilities, time, and so on, increasing the amount of data. But what do we do when the forecast crumbles in the future? Even with enough data, in theory we could run into a large number of bifurcation points, which would create too many parallel universes with different futures. What if it does not matter whether any single forecast is correct? What matters is their effectively infinite number, from which you could choose the right future with the help of special algorithms, or at least be prepared for multiple outcomes.

Comment author: ChristianKl 07 July 2016 09:36:38AM 0 points [-]

The right algorithm doesn't give you good results if the data which you have isn't good enough.

Comment author: dimensionx 07 July 2016 10:34:43AM 0 points [-]

What do you mean?

Comment author: ChristianKl 08 July 2016 03:20:59PM 0 points [-]

The amount of entropy that corresponds to real-world information in the predictions is at best the same as in the starting data; more likely, the predictions contain less information.
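One way to see this point is a small sketch, assuming the predictions are a deterministic function of the data (the data-processing inequality then says they cannot contain more information):

```python
from collections import Counter
from math import log2

def entropy(values):
    # Shannon entropy (bits) of the empirical distribution of `values`.
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * log2(c / n) for c in counts.values())

data = list(range(8))                 # 3 bits of raw data
predictions = [x // 2 for x in data]  # any deterministic model of it
print(entropy(data), entropy(predictions))  # 3.0 2.0
```

Whatever the model does, `entropy(predictions)` can never exceed `entropy(data)`, so garbage-quality inputs bound the quality of the outputs.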

Comment author: MrMind 07 July 2016 07:04:16AM *  0 points [-]

Another possibility is that after n years the algorithm smooths out the probabilities of all the possible futures so that they are equally likely...
The problem is not only computational: unless there are some strong pruning heuristics, the value of predicting the far future decays rapidly, since the probability mass (which is conserved) becomes diluted between more and more branches.
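A quick illustration of the dilution, assuming roughly even splits with a branching factor of b new futures per year:

```python
def max_branch_mass(branching_factor, years):
    # Total probability mass is conserved at 1, so with roughly even
    # splits each branch holds at most branching_factor ** -years.
    return branching_factor ** -years

for years in (1, 5, 10):
    print(years, max_branch_mass(3, years))
```

With just 3 branches per year, no individual 10-year future can carry more than about 0.0017% probability, so a point prediction that far out is nearly worthless without aggressive pruning.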