Oops. I realize now that I was confusing the definition of belief used here with the definition used for the game (a principled to-do list), so the idea isn't as applicable as I originally thought, but I'll try to answer you anyway.
As a player you can change your character's beliefs almost as often as you like, and the game rewards you for tailoring them to the context of each scene you enact, with different rewards depending on whether you act in accordance with them or undermine them (this encourages you to hold conflicting beliefs, which increases the drama of the shared story). Then, between game sessions, all players involved nominate the beliefs you appear never to undermine for promotion to trait-hood (indicating you've fulfilled your character's goals and they no longer need testing), and the beliefs you appear always to undermine for change. Traits often carry game-mechanical bonuses and penalties, but a trait can take almost a full story arc of deliberate undermining before being nominated for change.
Conflict in the game is handled in a very specific way. You describe your intent (what you want your character to achieve in the story) and how it is achieved; the GM then declares the skill rolls or other game mechanics required and sets the stakes (the consequences of failure). If neither the GM nor any of the players can think of an interesting direction a failed roll could take the story in, then no roll is made: you get what you wanted, and the group moves on to the next, more interesting, conflict. Otherwise, the stakes are negotiated and you choose whether to roll or change your mind. Once the roll is made, its results are irreversible within the fiction.
To a large degree it is up to the GM to create interesting and painful stakes with which to challenge your beliefs, so your mileage will vary.
Ah.
Would it be fair to summarize that as "it makes you update your beliefs insofar as it makes explicit that your character has new goals, and helps you practice changing your mind"?
Today's post, The Martial Art of Rationality was originally published on November 22, 2006. A summary (taken from the LW wiki):
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. It is the first post in the series; the introductory post was here, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.