Comments

Venkat

I've wondered about, and even modeled, versions of the fixed-horizon IPD in the past. I concluded that so long as the finite horizon is sufficiently large in the context of the application (100 is large for prison scenarios, tiny for other applications), a proper discounted accounting of future payoffs will restore TFT as an ESS. Axelrod used discounting schemes in various ways in his book(s).

The undiscounted case will always collapse. Recursive collapse to defect is actually rational and a good model for some situations, but you are right that in other situations it is both silly and not what people do, so it is the wrong model. If there is a finite-horizon case where discounting is not appropriate, I'd analyze it differently. To stop the recursive collapse, let the players optimize over possible symmetric reasoning futures...
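To make the discounting point concrete, here is a minimal sketch (my illustration, not from the comment) of a fixed-horizon IPD with a discount factor, played against a tit-for-tat opponent. The payoff values T=5, R=3, P=1, S=0, the 100-round horizon, the 0.9 discount factor, and the function names are all illustrative assumptions.

```python
# A minimal illustrative sketch (all numbers are assumptions): discounted
# payoffs in a fixed-horizon iterated prisoner's dilemma against tit-for-tat,
# using the standard Axelrod payoffs T=5, R=3, P=1, S=0.

def discounted_payoff(strategy_a, strategy_b, rounds, delta):
    """Total discounted payoff to player A over a fixed horizon."""
    payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    hist_a, hist_b, total = [], [], 0.0
    for t in range(rounds):
        a = strategy_a(hist_a, hist_b, t)
        b = strategy_b(hist_b, hist_a, t)
        total += (delta ** t) * payoff[(a, b)]
        hist_a.append(a)
        hist_b.append(b)
    return total

def tft(own, opp, t):
    # Tit-for-tat: cooperate first, then echo the opponent's last move.
    return "C" if t == 0 else opp[-1]

def defect_from(k):
    # Play tit-for-tat until round k, then defect for the rest of the game.
    return lambda own, opp, t: "D" if t >= k else tft(own, opp, t)

N, delta = 100, 0.9
print(discounted_payoff(tft, tft, N, delta))                 # cooperate throughout: ~29.999
print(discounted_payoff(defect_from(0), tft, N, delta))      # defect immediately:   ~14.0
print(discounted_payoff(defect_from(N - 1), tft, N, delta))  # defect on last round only: ~29.999
```

In this sketch, defecting immediately loses roughly half the cooperative payoff once retaliation is discounted in, while the last-round deviation gains only on the order of 10^-4, which is one way to see why the backward-induction unraveling has so little practical bite over a long, discounted horizon.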

Venkat

Eliezer:

This is NOT premature! You just saved yourself at least one reader you were about to lose (me). I (and I suspect many others) have not been among the most regular readers of OB because it was frankly not clear to me whether you had anything new to say, or whether you were yet another clever but ultimately insubstantial "lumper" polymath-wannabe (to use Howard Gardner's term) who ranged plausibly over too many subjects. Your 'physics' series, especially, almost made me unsubscribe from this blog (I'll be honest: though you raised some interesting thoughts there, that series did NOT impress me).

But with this post, for what it is worth, you've FINALLY seriously engaged my attention. I'll be reading your follow-up to this thread of thinking very carefully. I, too, started my research career fascinated by AI (I wrote an ELIZA clone in high school). Unlike you, my more mature undergrad reaction to the field was not that AI was hard, but that it was simply an inappropriate computer science framing of an extremely hard problem in the philosophy of mind. I think you would agree with this statement.

Since philosophy didn't seem like a money-making career, my own reaction was to steer towards a fascinating field that neighbors AI: control theory (which CS people familiar with history will probably think of as "stuff beyond subsymbolic AI, or modern analog computing"). Back when I started grad school in control theory (1997), I was of the opinion that it had more to say about the philosophy problem than AI did. My opinion grew more nuanced through my PhD and postdoc, and today I am a somewhat omnivorous decision-science guy with a core philosophical world-view based on control theory, stealing freely from AI, operations research, and statistics to fuel my thinking both on the practical pay-the-bills problems I work on and on the philosophy problem underlying AI.

Oddly enough, I too have been drafting my first premature essay on AI, corresponding to yours, which I have tentatively titled, "Moving Goalposts are Good for AI." I should post the thing in the next week or two.

I suspect you won't agree with my conclusions though :)

Venkat

My favorite example of both the controversies and the settled: the string theory controversies and Roger Penrose's careful treatment of what we do know in "The Road to Reality."

Of course, Popper and Feyerabend would have us believe that nothing is ever settled (and I tend to agree), but even in Popperian mode, the displacing theory tends to subsume and refine its predecessor at the asymptotes rather than invalidate it directly.

I do not keep up with science news, but for different reasons: the sheer fire-hose volume of it. Especially in the gleeful stamp-collecting world of the biological sciences. I figure if something is important enough, it will eventually get to me after layers of filtering.

Venkat

Definitely one of the most useful posts I've seen on Overcoming Bias. I shall be referring back to this list often. I am surprised, though, that you did not reference that incisive philosopher, Humpty Dumpty, who had views about a word meaning exactly what he wanted it to mean :) While I haven't thought through the taxonomy of failures quite as thoroughly, I spent a fair amount of time figuring out the uses of the words 'strategy' and 'tactics' in collaboration with a philosopher of language, and wondering about the motivated bias that enters into deliberately making these ambiguous words fuzzier than they need to be. The result was a piece on the semantics of decision-making words. Somewhere in this dictionary, there is also probably a need to connect up with notions of conceptual metaphor (Lakoff) and the Sapir-Whorf hypothesis. It'll probably come to me in a day or two. Something connecting intent, connotation, denotation... hmm.

Venkat

Venkat

What you are talking about in terms of Solomonoff induction is usually called algorithmic information theory, and the length of the shortest program that produces a bit string is usually called Kolmogorov-Chaitin complexity. I am sure you know this, which raises the question: why didn't you mention it? I agree, it is the neatest way to think about Occam's razor. I am not sure why some are raising PAC theory and VC-dimension; I don't quite see how they illuminate Occam. Minimalist inductive learning is hardly the simplest "explanation" in the Occam sense, and is actually closer to Shannon entropy in spirit, in being more of a raw measure. Gregory Chaitin's 'Meta Math! The Quest for Omega', which I did a review summary of, is a pretty neat look at this stuff.
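Since Kolmogorov-Chaitin complexity is uncomputable, here is a minimal sketch (my illustration, not the commenter's) that uses zlib-compressed length as a crude computable proxy for the Occam intuition: a highly patterned string admits a short description, while a random string of the same length does not. The zlib proxy, the 1000-byte strings, and the variable names are all my illustrative choices.

```python
# A minimal sketch: compressed length as a crude, computable stand-in for
# Kolmogorov-Chaitin complexity (the true shortest-program length is
# uncomputable). Patterned data compresses well; random data does not.
import os
import zlib

regular = b"01" * 500          # highly patterned 1000-byte string
random_ = os.urandom(1000)     # incompressible with high probability

for name, s in [("regular", regular), ("random", random_)]:
    print(name, len(s), "->", len(zlib.compress(s, 9)))

# Typical output: the patterned string compresses to a few dozen bytes,
# while the random one stays near 1000 bytes plus a small overhead.
```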

Venkat

That was kinda hilarious. I like your reversal test to detect content-free tautologies. Since I am working right now on a piece of AI political fiction (involving voting rights for artificial agents and the questions that raises), I was thrown for a moment, but then tuned in to what YOU were talking about.

The 'Yes, Minister' and 'Yes, Prime Minister' series are full of extended pieces of such content-free dialog.

More seriously, though, this is a bit of a strawman attack on the word 'democracy' being used as decoration/group-dynamics cueing. You kinda blind-sided this guy, and I suspect he'd have a better answer if he had time to think. There is SOME content even to such a woolly-headed sentiment. Any large group (including large research teams) has conflict, and there is a spectrum of conflict resolution ranging from dictatorial imposition through democracy to consensus.

Whether or not the formal scaffolding is present, an activity as complex as research CANNOT work unless the conflict-resolution mechanisms are closer to the democracy/consensus end of the spectrum. Dictators can whip people's muscles into obedience, maybe even their lower-end skills ("do this arithmetic or DIE!"), but when you want to engage the creativity of a gang of PhDs, it is not going to work until there is a mechanism for their dissent to be heard and addressed. This means that making the group itself representative (the 'multinational' part) automatically brings in the spirit, if not the form, of democratic discourse. So yes, if there are autocentric cultural biases that today's AI researchers bring to the game, making the funding and execution multinational would help. Having worked on AI research as an intern in India 12 years ago, and working today in related fields here in the US, I can't say I see any such biases in this particular field, but perhaps in other fields, putting together multinational, internationally funded research teams would actually help.

On the flip side, you can have all the mechanisms and still allow dictatorial intent to prevail. My modest take on ruining democratic meetings run on Robert's Rules:

The 15 laws of Meeting Power