VoiceOfRa comments on Debunking Fallacies in the Theory of AI Motivation - Less Wrong

Post author: Richard_Loosemore 05 May 2015 02:46AM


Comments (343)


Comment author: VoiceOfRa 18 May 2015 12:06:48AM 3 points

Since additional computational power can always be put towards more samples on the margin, you can always get your inferred estimates marginally closer to the real probabilities just by adding compute time.

By adding exponentially more time.

Computational complexity can't simply be waived away by saying "add more time/memory".

Comment author: [deleted] 18 May 2015 07:38:15PM -1 points

A) I did say marginally.

B) It's a metaphor intended to convey the concept to people without the technical education to know or care where the diminishing returns line is going to be.

C) As a matter of fact, in sampling-based inference, computation time scales linearly with sample size: you're just running the same code n times with n different random parameter values. There will be diminishing returns to sample size once you've got a large enough n for relative frequencies in the sample to get within some percentage of the real probabilities, but actually adding more samples is a linearly scaling cost.
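The linear-cost claim here can be made concrete with a minimal Monte Carlo sketch (my own illustrative example, not code from either commenter): estimating a probability by drawing n samples costs one draw per sample, so runtime is O(n). The event and distribution chosen below are arbitrary.

```python
import random

def estimate_probability(event, sample, n):
    """Monte Carlo estimate of P(event): run the same sampling code n times.
    Cost is linear in n, one draw per iteration."""
    hits = 0
    for _ in range(n):
        if event(sample()):
            hits += 1
    return hits / n

random.seed(0)
# Illustrative: P(X > 1) for X ~ Normal(0, 1); the true value is about 0.1587.
p = estimate_probability(lambda x: x > 1, lambda: random.gauss(0, 1), 100_000)
```

Doubling n doubles the work and nothing more, which is the sense in which the cost curve is linear.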

Comment author: VoiceOfRa 19 May 2015 02:35:17AM 0 points

It's a metaphor intended to convey the concept to people without the technical education to know or care where the diminishing returns line is going to be.

The problem is that it conveys the concept in a very misleading way.

Comment author: [deleted] 19 May 2015 03:59:17PM -1 points

No, it does not. In sampling-based inference, the necessary computation time grows linearly with the demanded sample size, not exponentially. There may be diminishing returns to increasingly accurate probabilities, but that's a fact about your utility function rather than an exponential increase in necessary computational power.

This switch, from an exponential computational-cost growth curve to a linear one, is precisely why sampling-based inference has given us a renaissance in Bayesian statistics.

Comment author: Lumifer 19 May 2015 04:28:40PM 2 points

There may be diminishing returns to increasingly accurate probabilities, but that's a fact about your utility function

This has nothing to do with utility functions.

Sample size is a linear function of the CPU time, but the accuracy of the estimates is NOT a linear function of sample size. In fact, there are huge diminishing returns to large sample sizes.
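Lumifer's point is the standard Monte Carlo error scaling: the standard error of a sample mean shrinks like 1/√n, so quadrupling the sample size only halves the error. A small sketch of my own (not from the thread), using repeated trials to measure the spread of the estimator empirically:

```python
import random
import statistics

def mc_error(n, trials=200):
    """Empirical standard deviation of a Monte Carlo estimate of E[X],
    X ~ Normal(0, 1), across repeated runs with sample size n.
    Theory predicts roughly 1 / sqrt(n)."""
    estimates = [
        sum(random.gauss(0, 1) for _ in range(n)) / n
        for _ in range(trials)
    ]
    return statistics.pstdev(estimates)

random.seed(1)
e1 = mc_error(100)   # theory: about 1/sqrt(100) = 0.10
e2 = mc_error(400)   # theory: about 1/sqrt(400) = 0.05
# 4x the samples (and 4x the CPU time) buys only a 2x error reduction
```

Linear CPU cost per sample plus 1/√n accuracy means the effective cost of each additional digit of accuracy grows by a factor of 100, which is the diminishing-returns curve Lumifer is pointing at.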

Comment author: [deleted] 19 May 2015 05:03:19PM 0 points

Ah, ok, fair enough on that one.