(Having said which, I believe there's some evidence that even a not-all-that-good human player armed with multiple computers running different programs can be scarily effective too.)
If you're thinking about the same thing I am, the player was "not-all-that-good" at chess, but knew a lot about chess programs and their different relative weaknesses and strengths.
Hypothetically, I wonder whether that approach could be constructively imitated by a computer: a meta-chess program that divides its computational resources among several subprograms and combines their output to play better than any one subprogram would if given the full computational resources.
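As a toy illustration of that meta-program idea, here is a minimal sketch in which each sub-evaluator casts a vote and the meta-player takes the plurality move. The engine heuristics, move names, and scores are all invented for illustration; a real version would run actual engines under time slices rather than one-line scoring rules.

```python
def materialist(moves):
    # toy engine: prefers the largest material gain
    return max(moves, key=lambda m: m["material"])

def positional(moves):
    # toy engine: prefers the best positional score
    return max(moves, key=lambda m: m["position"])

def aggressive(moves):
    # toy engine: prefers the most attacking move
    return max(moves, key=lambda m: m["attack"])

def meta_choose(moves, subprograms):
    """Each subprogram casts one vote; the move with the most votes wins."""
    votes = {}
    for engine in subprograms:
        pick = engine(moves)["name"]
        votes[pick] = votes.get(pick, 0) + 1
    return max(votes, key=votes.get)

candidate_moves = [
    {"name": "Nxe5", "material": 3, "position": 1, "attack": 2},
    {"name": "O-O",  "material": 0, "position": 3, "attack": 0},
    {"name": "Qh5",  "material": 0, "position": 4, "attack": 3},
]

# positional and aggressive both favour Qh5, outvoting the materialist
print(meta_choose(candidate_moves, [materialist, positional, aggressive]))
```

Whether such an ensemble could actually beat its strongest member is the open question; the human player succeeded partly because he knew *when* to trust each program, which a naive vote doesn't capture.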
I think we are indeed thinking of the same instance. And yes, it would be interesting to try getting a computer to play that way.
Here's a nice exploitation of a similar idea: The Fastest and Shortest Algorithm for All Well-Defined Problems. See also the discussion at Hacker News, where in particular you might want to read the comment from me that explains roughly what's going on, and the comment from Eliezer that explains one way in which Hutter's description of his algorithm claims more than it really delivers. Nonetheless, it's a very neat idea.
Some of you have been trying to raise money for the Singularity Institute, and I have an idea that may help.
The idea is to hold public competitions on LessWrong with money going to charity. Agree to a game and an amount of money, then have each player designate a charity. After the game, each player gives the agreed-upon amount to the charity designated by the winner.[1] It’s a bit like celebrity Jeopardy.[2]
Play the game here on LessWrong or post a record of it.[3] That will spread awareness of the charities and encourage others to emulate you.
The game can be as simple as a wager or something more involved:
We’ve already played a few games on LessWrong; more on those in a moment.
First, I have a confession to make: I don’t really care how much money gets donated to the Singularity Institute, nor am I trying to drum up money for some other cause. I mainly want you all playing games.
Not just playing them, of course. Playing them here in front of the rest of LessWrong and analyzing the moves in terms of the “personal art of rationality.”
We need more approaches to improvement. Even in Eliezer_Yudkowsky’s Bayesian Conspiracy fictional series/manifesto, many other schools of thought (called, for dramatic effect, “conspiracies”) were present. As I recall, the “Competitive Conspiracy” was mentioned frequently, but there are other reasons for choosing to start with games.
Games are fun, of course. They also deal with the “personal art” of getting familiar with your own brain, which I think has been underrepresented on LW. I do believe there are certain important things we can’t learn properly just by reading, arguing, and doing math (valuable as those techniques are). Games are an easy, intuitive first step to filling that gap.
We’ve already had a few games here. Warrigal held an Aumann’s agreement game competition. I created Pract and played it with wedrifid. Neither one has caught on as a LessWrong pastime, but the comments revealed that people here know many interesting games.
And there’s the donation hook. Some of you believe the world is at stake, so that’s a nice motivator.
[1] Of course it’s not always the case that you have exactly one winner. I suggest that if there’s no winner, each player give to their own charity, and if there are multiple winners, each player split their donation evenly among the winners’ charities. ↩
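That payout rule can be sketched as a short function. The player names, charities, and stake below are invented for illustration:

```python
def payouts(players, winners, stake):
    """Return {player: {charity: amount}} owed under the rule above.

    players maps each player's name to their designated charity;
    winners is the (possibly empty) list of winning players.
    """
    owed = {}
    for p, charity in players.items():
        if not winners:
            # no winner: each player gives to their own charity
            owed[p] = {charity: stake}
        else:
            # split the stake evenly among the winners' charities
            share = stake / len(winners)
            owed[p] = {}
            for w in winners:
                c = players[w]
                owed[p][c] = owed[p].get(c, 0) + share
    return owed

players = {"Alice": "SIAI", "Bob": "GiveWell", "Carol": "Oxfam"}
result = payouts(players, ["Alice", "Bob"], 100)
```

With Alice and Bob both winning, every player owes 50 to SIAI and 50 to GiveWell; with no winners, each player's full stake goes to their own charity.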
[2] This is, of course, not a new idea. I’m just suggesting that we adopt it and make it a common practice on LessWrong. ↩
[3] Unless it contains some remarkable insight, I wouldn’t make it a top-level post. The comment area here or in one of the open threads would be good. ↩
[4] I see mutual consent as an important element of games. ↩