Here's a new worst-case scenario for the random predictor: assume that on each trial, slightly more than half of the total weight of the experts predicts correctly (there exist a large number of experts that are better than random, but not by much; over a large number of trials, the experts that perform worse than random are reduced to negligible weight).
The deterministic algorithm is then right almost all of the time, while the one with added randomness is wrong about half of the time.
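To make the scenario concrete, here is a minimal simulation sketch. It is not from the original post: the pool size, the 55% weight-coverage target, the random-order construction of each trial, and the halving penalty BETA are all my own assumptions, chosen only to realize "slightly more than half of the total weight is correct on every trial."

```python
import random

random.seed(0)

N_EXPERTS = 100   # size of the expert pool (assumption)
N_TRIALS = 1000   # number of prediction rounds (assumption)
TARGET = 0.55     # fraction of total weight forced to be correct each trial
BETA = 0.5        # multiplicative penalty for a wrong expert (assumption)

weights = [1.0] * N_EXPERTS
det_mistakes = 0
rand_mistakes = 0

for _ in range(N_TRIALS):
    total = sum(weights)

    # Build the trial so that slightly more than half of the current total
    # weight predicts correctly: walk the experts in a random order and
    # mark them correct until ~55% of the weight is covered.
    order = list(range(N_EXPERTS))
    random.shuffle(order)
    correct = [False] * N_EXPERTS
    covered = 0.0
    for i in order:
        if covered >= TARGET * total:
            break
        correct[i] = True
        covered += weights[i]

    # Deterministic weighted majority: side with the heavier half of the weight.
    if covered <= total / 2:
        det_mistakes += 1

    # Randomized predictor: follow one expert sampled in proportion to weight.
    pick = random.choices(range(N_EXPERTS), weights=weights)[0]
    if not correct[pick]:
        rand_mistakes += 1

    # Multiplicative-weights update: halve the weight of every wrong expert.
    weights = [w * (1.0 if c else BETA) for w, c in zip(weights, correct)]

print(f"deterministic mistakes: {det_mistakes} / {N_TRIALS}")
print(f"randomized mistakes:    {rand_mistakes} / {N_TRIALS}")
```

Under this construction the weighted majority vote is on the correct side every round, while the weight-proportional random pick is wrong on roughly 45% of the rounds.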
Hell, I can create a better bound trivially: the number of mistakes made cannot be higher than the number of trials performed. (For a single algorithm that is right 59% of the time, combined with its complement, which is right 41% of the time, over 100 trials the theorem's upper bound is 2.41*(41 + log2(2)), or ~101 errors. Literally every possible system will make fewer than 101 errors in 100 binary trials.)
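For reference, the arithmetic behind that ~101 figure, plugging m = 41 mistakes for the better expert and n = 2 experts into the 2.41*(m + log2 n) mistake bound the comment is using:

```python
from math import log2

m = 41    # mistakes of the better (59%-accurate) expert over 100 trials
n = 2     # two experts: the 59% predictor and its complement
bound = 2.41 * (m + log2(n))
print(bound)  # 101.22 -- weaker than the trivial bound of 100
```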
Today's post, The Weighted Majority Algorithm, was originally published on 12 November 2008. A summary (taken from the LW wiki):
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Worse Than Random, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.