That's actually one of my biggest problems. I think my ability to model other people is way below average.
Well, what sort of mistakes has your model made? Is it limited to predicting how well you'll be understood, or are there other specific types of predictions that your mental model consistently gets wrong?
Today's post, Illusion of Transparency: Why No One Understands You, was originally published on 20 October 2007. A summary (taken from the LW wiki):
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Pascal's Mugging: Tiny Probabilities of Vast Utilities, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.