All of dimensionx's Comments + Replies

dimensionx:
What do you mean?

[This comment is no longer endorsed by its author]
ChristianKl:
The amount of entropy that corresponds to real-world information in the predictions is at best the same as in the starting data, and more likely the predictions contain less information.
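
A minimal sketch of this point, assuming discrete data and a deterministic prediction rule (both the example data and the rule below are hypothetical): a deterministic function can only merge outcomes, so the empirical Shannon entropy of the predictions never exceeds that of the starting data.

```python
from collections import Counter
import math

def entropy(samples):
    """Shannon entropy (bits) of the empirical distribution of `samples`."""
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

data = [0, 1, 2, 3, 0, 1, 2, 3]   # hypothetical starting data, uniform over 4 values
predict = lambda x: x // 2        # hypothetical prediction rule, collapses 4 values to 2

print(entropy(data))                        # 2.0 bits in the starting data
print(entropy([predict(x) for x in data]))  # 1.0 bits -- never more than the input
```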

[This comment is no longer endorsed by its author]
Lumifer:
Your mathematical models are supposed to reflect real-life features of the data. Not all data is the same, and the same models are not appropriate for all data.

[This comment is no longer endorsed by its author]
Lumifer:
So what kind of metrics are you interested in forecasting? Macroeconomic ones (GDP, inflation, etc.)? Industry-specific things? Interest rates?

[This comment is no longer endorsed by its author]
ChristianKl:
The right algorithm doesn't give you good results if the data you have isn't good enough.
MrMind:
Another possibility is that after n years the algorithm smooths out the probability of all the possible futures so that they are equally likely... The problem is not only computational: unless there are some strong pruning heuristics, the value of predicting the far future decays rapidly, since the probability mass (which is conserved) becomes diluted across more and more branches.
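
A rough numerical illustration of this dilution, assuming each simulated year splits every future into a hypothetical k equally likely branches: total probability mass is conserved, but the mass on any single branch falls off as k^-n, so ranking individual far futures quickly becomes meaningless without pruning.

```python
k = 3  # hypothetical branching factor per simulated year
for n in range(1, 11):
    branches = k ** n
    # Mass is conserved in total, but each branch carries only 1/branches of it.
    print(f"year {n:2d}: {branches:7d} branches, mass per branch = {1 / branches:.2e}")
```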

[This comment is no longer endorsed by its author]
[anonymous]:
A theoretical example: somewhere in space flies an ASI in the form of a cloud of nanobots that continuously simulates the future. It does this in order to know all the risks and opportunities of events in advance, so that it can conduct its research more effectively, for example by avoiding lost time. Of course, this only pays off if modeling the future uses fewer resources than it saves. There is just one problem: its sensors indicate that there is no other ASI within thousands of parsecs. But there is some probability (0.0...1%) that another ASI will suddenly appear next to it, using a teleport about which the first intellect knows nothing. The calculation shows a 0.0...1% probability of such an appearance and a 5% probability that the other ASI would destroy the first one. Which will the first algorithm select: wasting resources on a low-probability problem, or accepting the probability of destruction? On the whole, the algorithm can create a lot of markers which it will have to check against the real world, and these markers will keep its probabilistic models correct over time. So you can build a model in which the regions of highest probability density are verified most densely by markers, on the basis of genetic algorithms.
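
A toy sketch of the expected-cost comparison implied here. Every number below is a hypothetical stand-in for the comment's elided "0.0...1%" figures; only the structure of the decision matters.

```python
p_appear  = 1e-6   # hypothetical: another ASI appears via the unknown teleport
p_destroy = 0.05   # given an appearance, it destroys the first ASI
cost_dead = 1e12   # hypothetical utility lost if destroyed
cost_prep = 1e3    # hypothetical resources spent preparing for the scenario

expected_loss_if_ignored = p_appear * p_destroy * cost_dead  # 5e4 with these numbers
# Prepare only when preparation is cheaper than the expected loss from ignoring.
print("prepare" if cost_prep < expected_loss_if_ignored else "ignore")
```
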
ChristianKl:
What does that phrase mean?
Lumifer:
That's called chaining the forecasts. This tends to break down after very few iterations because errors snowball and because tail events do happen.
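
A minimal sketch of chained forecasting and why its error compounds, assuming a toy AR(1) world and a slightly misspecified coefficient (all values hypothetical). Each step feeds the model's own prediction back in, so a small per-step error snowballs with the horizon, and the noise term stands in for the tail events the model never sees coming.

```python
import numpy as np

rng = np.random.default_rng(0)
true_phi, model_phi = 0.95, 0.90   # true vs. (mis)estimated dynamics

# Simulate one true path and chain forecasts from the same starting point.
truth, forecast = [10.0], [10.0]
for _ in range(20):
    truth.append(true_phi * truth[-1] + rng.normal(scale=0.1))
    forecast.append(model_phi * forecast[-1])  # prediction fed back into itself

for h in (1, 5, 10, 20):
    print(f"h={h:2d}: truth={truth[h]:6.2f}, forecast={forecast[h]:6.2f}, "
          f"error={abs(truth[h] - forecast[h]):5.2f}")
```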

[This comment is no longer endorsed by its author]
Lumifer:
Do you mean financial markets?