There are a lot of metrics for how good a model is (e.g. adjusted R-squared, AIC, SIC).
Edit to add: Backcasting would be a good measure.
76 data points, after adjustments.
Hmmm. Personally, I think the AIC/SIC stuff is a hack.
I don't think statistics works in the N=76 regime. In Bayesian terms, the data is sufficient to justify only a minor update, so whatever conclusions you draw will be dominated by your choice of prior. It's interesting that it might be used in court, because it means that the lawyers will effectively be arguing about the justifications of statistical inference - or, if they are both Bayesians, how to choose a prior.
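The prior-dominance claim can be illustrated with a toy conjugate normal-normal update: with N = 76 noisy observations of an unknown mean, two analysts with different priors still end up with noticeably different posteriors. All numbers here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 76
sigma = 10.0  # high observation noise, so the data carry little information
data = rng.normal(loc=1.0, scale=sigma, size=n)

def posterior(mu0, tau0, data, sigma):
    # Conjugate update for a normal mean with known observation noise:
    # precisions add, and the posterior mean is a precision-weighted average.
    n = len(data)
    prec = 1 / tau0**2 + n / sigma**2
    mean = (mu0 / tau0**2 + data.sum() / sigma**2) / prec
    return mean, 1 / np.sqrt(prec)

print(posterior(0.0, 1.0, data, sigma))  # sceptical prior centered at 0
print(posterior(5.0, 1.0, data, sigma))  # optimistic prior centered at 5
```

Here the data precision (76 / 100) is comparable to the prior precision, so a 5-unit disagreement in priors survives as a multi-unit disagreement in posteriors, which is the "lawyers arguing about priors" scenario.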
As a part of my job, I recently created an econometric model. My boss wants someone to look over the math before it's submitted internally throughout the company. We have a modest amount of money set aside for someone to audit the process.
The model is an ARMA(2,1) with seasonality, trend, and a dummy variable. There's no heteroscedasticity or serial correlation, but the Ramsey RESET test suggests a different specification might fit better.
I currently have the data in an eviews file, so you'd need to do zero data entry.
There's a small chance this will be used in court, but none of the liability will be transferred to you. There should be an emphasis placed on parsimony. You'd have to sign a confidentiality agreement.
If you're qualified to review this and suggest a marginally better model, this would be an easy way for you to make bank in a couple hours' time. If it goes well, there might be more work like this in the future.
Let me know if you're interested.