lifebiomedguru

I can't help but think that the reason randomization appears to add information in the context of machine learning classifiers for the dichotomous case is that the actual evaluation of the algorithms is empirical and, as such, uses finite data. Two classes of non-demonic error always exist in the finite case: measurement error and sampling error.

Multiattribute classifiers use individual measurements from many attributes. No empirical measurement is made without error; the more attributes that are measured, the more measurement error intrudes.
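To make that concrete, here is a toy Python sketch (scikit-learn, with made-up dataset sizes and an assumed Gaussian noise level, not data from any real study): as more attributes carry independent measurement noise, the measured data drift further from the true values, and the accuracy estimate typically drifts with them.

```python
# Toy illustration (assumed parameters): per-attribute measurement error
# accumulating as more attributes are measured with noise.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_true, y = make_classification(n_samples=400, n_features=30, n_informative=10,
                                random_state=0)

for n_noisy in (0, 10, 20, 30):  # number of attributes measured with error
    X_meas = X_true.copy()
    # add independent Gaussian measurement error to the first n_noisy attributes
    X_meas[:, :n_noisy] += rng.normal(scale=1.0, size=(X_meas.shape[0], n_noisy))
    acc = cross_val_score(LogisticRegression(max_iter=1000), X_meas, y, cv=5)
    print(f"noisy attributes: {n_noisy:2d}  mean CV accuracy: {acc.mean():.3f}")
```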

No finite sample is an exact sample of the entire population. To the measurement error for each attribute, we must add sampling error. Random selection of individuals for training and test sets leads to data sets that are approximations of the sample, and the sample is itself an approximation of the population.
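A similar toy sketch for sampling error (again hypothetical data and an arbitrary model choice): the same finite sample, partitioned 100 different random ways into training and test sets, gives a spread of accuracy estimates rather than a single number.

```python
# Toy illustration (assumed parameters): one fixed finite sample, many random
# train/test partitions, a spread of accuracy estimates.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           flip_y=0.05, random_state=0)  # finite sample of a population

accuracies = []
for seed in range(100):  # 100 different random partitions of the same sample
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    accuracies.append(accuracy_score(y_te, model.predict(X_te)))

print(f"mean accuracy: {np.mean(accuracies):.3f}")
print(f"spread (std):  {np.std(accuracies):.3f}")
print(f"range:         {min(accuracies):.3f} - {max(accuracies):.3f}")
```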

Indeed, the entire finite population is itself but a sample of what the population could be; given time, populations change.

So the generalizable performance (say, accuracy) of AI algorithms is hard to nail down precisely, unless the classifier is perfect (fusion = 0 -> planet vs. fusion = 1 -> sun, for example, at a specific TIME).
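Even with a fixed, deterministic classifier and zero measurement error, a finite test set only supports an interval estimate of accuracy. A back-of-the-envelope example (hypothetical numbers, normal approximation to the binomial):

```python
# Toy illustration (assumed numbers): an observed accuracy of 0.90 on 200 test
# cases still carries binomial uncertainty about the true accuracy.
import math

n, observed_acc = 200, 0.90          # hypothetical test-set size and score
se = math.sqrt(observed_acc * (1 - observed_acc) / n)
lo, hi = observed_acc - 1.96 * se, observed_acc + 1.96 * se
print(f"approximate 95% CI for true accuracy: {lo:.3f} - {hi:.3f}")  # ~0.858 - 0.942
```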

I contend, therefore, that some of the 'improvement' observed due to randomization is actually overfitting of the prediction model to the finite sample.
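Here is a rough sketch of the mechanism I have in mind (purely illustrative, with an assumed random-forest setup and made-up sample sizes): if you pick the randomized variant that scores best on one finite test set, that score is optimistically biased relative to the same model's performance on a fresh sample from the same population.

```python
# Toy illustration (assumed setup): selecting the best random seed on a finite
# test set, then checking that "winner" on a fresh sample from the same population.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# one large "population", split into a finite working sample and a fresh sample
X, y = make_classification(n_samples=4000, n_features=20, n_informative=5,
                           flip_y=0.1, random_state=0)
X_work, y_work, X_fresh, y_fresh = X[:600], y[:600], X[600:], y[600:]
X_tr, y_tr, X_te, y_te = X_work[:400], y_work[:400], X_work[400:], y_work[400:]

results = []
for seed in range(50):  # 50 randomized variants differing only in their seed
    model = RandomForestClassifier(n_estimators=50, random_state=seed).fit(X_tr, y_tr)
    results.append((accuracy_score(y_te, model.predict(X_te)), model))

best_finite_acc, best_model = max(results, key=lambda r: r[0])
fresh_acc = accuracy_score(y_fresh, best_model.predict(X_fresh))
print(f"best accuracy on the finite test set: {best_finite_acc:.3f}")
print(f"same model on a fresh sample:         {fresh_acc:.3f}")  # typically lower
```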

I'm in the field of bioinformatics, and in biology, genetics, genomics, proteomics, and medicine, we always admit to measurement error.