eli_sennesh comments on Debunking Fallacies in the Theory of AI Motivation - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
A) I did say marginally.
B) It's a metaphor intended to convey the concept to people without the technical education to know or care where the diminishing returns line is going to be.
C) As a matter of fact, in sampling-based inference, computation time scales linearly with sample size: you're just running the same code n times with n different random parameter values. There are diminishing returns to sample size once n is large enough for the relative frequencies in the sample to fall within some tolerance of the real probabilities, but the cost of drawing additional samples still scales linearly.
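A minimal sketch of why the cost is linear, assuming a toy model of my own invention (the function names and the two-dice example are hypothetical, not from any particular library): each draw just runs the same model code once, so n draws cost n times the per-sample work.

```python
import random

def estimate_probability(model, n, seed=0):
    """Monte Carlo estimate of P(model(rng) is True) from n independent draws.

    Each draw runs the same code once with fresh randomness, so total
    work grows linearly with n -- there is no exponential blowup.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):  # n iterations => cost linear in n
        if model(rng):
            hits += 1
    return hits / n

# Toy model: probability that the sum of two fair dice exceeds 9.
# The true value is 6/36, about 0.167.
def two_dice_over_9(rng):
    return rng.randint(1, 6) + rng.randint(1, 6) > 9

estimate = estimate_probability(two_dice_over_9, 100_000)
```

With n = 100,000 the relative frequency lands close to the true probability, and doubling n simply doubles the running time.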
The problem is that it conveys the concept in a very misleading way.
No, it does not. In sampling-based inference, the necessary computation time grows linearly with the demanded sample size, not exponentially. There may be diminishing returns to increasingly accurate probabilities, but that's a fact about your utility function rather than an exponential increase in necessary computational power.
This precise switch, from an exponential computational cost growth-curve to a linear one, is why sampling-based inference has given us a renaissance in Bayesian statistics.
This has nothing to do with utility functions.
Sample size is a linear function of the CPU time, but the accuracy of the estimates is NOT a linear function of sample size: the standard error of a Monte Carlo estimate shrinks only as 1/√n. In fact, there are huge diminishing returns to large sample sizes.
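The diminishing returns follow from the standard-error formula for a frequency estimate, √(p(1−p)/n); a small sketch (function name hypothetical) showing that quadrupling the sample size only halves the error:

```python
import math

def standard_error(p, n):
    # Standard error of a Monte Carlo frequency estimate of probability p
    # from n independent samples: sqrt(p * (1 - p) / n).
    return math.sqrt(p * (1 - p) / n)

p = 0.5  # worst case for a binary estimate
se_1k = standard_error(p, 1_000)
se_4k = standard_error(p, 4_000)
ratio = se_1k / se_4k  # quadrupling n halves the error, not quarters it
```

So accuracy improves only as √n while cost grows as n, which is exactly why the returns diminish.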
Ah, ok, fair enough on that one.