This question exists in the awkward space between "things undergrads google for homework" and "things on the cutting edge," so google isn't being super helpful.
I have a number I want a computer to estimate. Right now I have two regression models and an insider methodology. The former can be used to create two normal curves. The latter creates a point estimate only, but I can back into a confidence interval/normal curve with an acceptable amount of arbitrary hand-waving. If necessary, this could be conceived of as a prior.
How can I automatically weight the three curves into a single point estimate? I vaguely remember something from an econometrics class about weighting forecasts in a way that minimized total standard error, but I tried to work the math out myself and I didn’t know how to deal with the covariances of the forecasts. Can I simply assume the forecast covariances are zero?
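For concreteness, here's a rough numpy sketch of what I think that minimum-variance combination looks like, once assuming the covariances are zero and once with a guessed-at correlation between the two regression forecasts (all numbers below are made up):

```python
import numpy as np

# Made-up point estimates and standard errors for the three forecasts:
# regression model A, regression model B, and the insider estimate.
means = np.array([10.2, 11.5, 9.8])
ses = np.array([1.0, 1.8, 2.5])

# Zero-covariance case: the minimum-variance weights are just
# normalized inverse variances (precision weighting).
w_zero = (1 / ses**2) / np.sum(1 / ses**2)
est_zero = w_zero @ means

# Full-covariance case: with error covariance matrix Sigma, the weights
# that minimize the combined variance are w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
rho = 0.5  # guessed correlation between the two regression forecasts
Sigma = np.diag(ses**2)
Sigma[0, 1] = Sigma[1, 0] = rho * ses[0] * ses[1]
ones = np.ones(len(means))
w_cov = np.linalg.solve(Sigma, ones)
w_cov /= ones @ w_cov
est_cov = w_cov @ means

print(w_zero, est_zero)
print(w_cov, est_cov)
```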
This seems like a good place to use Bayes’ law, but I don't know how to formally set it up.
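My rough attempt at a setup, treating the insider number as a normal prior and the two regression forecasts as independent normal "observations" of the same quantity (again, everything below is made up), would be the conjugate normal-normal update:

```python
import numpy as np

# Insider estimate treated as a normal prior (made-up numbers).
prior_mean, prior_sd = 9.8, 2.5

# Regression forecasts treated as independent normal observations
# of the same unknown quantity.
obs_means = np.array([10.2, 11.5])
obs_sds = np.array([1.0, 1.8])

# Conjugate normal-normal update: precisions add, and the posterior
# mean is the precision-weighted average of prior and observations.
prec = np.concatenate(([1 / prior_sd**2], 1 / obs_sds**2))
mus = np.concatenate(([prior_mean], obs_means))
post_var = 1 / prec.sum()
post_mean = post_var * (prec * mus).sum()

print(post_mean, post_var**0.5)
```

With everything normal and independent this seems to collapse to the same precision-weighted average as the weighting scheme above, so maybe the two framings amount to the same calculation?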
Edit to Add: Bayesian statistics is still new to me, so forgive me for being a bit dense. Here's my understanding of the methodology right now.
What exactly is D in this scenario?
D is your data.
First, I misspoke - you don't want the likelihood, you want the marginal distribution of the data. See http://www-personal.umich.edu/~bnyhan/montgomery-nyhan-bma.pdf especially the first 5 or so pages.
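As a rough illustration of where that marginal distribution enters, one common shortcut (a standard approximation, not necessarily how the linked paper does it) is to approximate each model's marginal likelihood p(D | M_k) by exp(-BIC_k / 2), which gives BMA-style model weights directly from fitted models:

```python
import numpy as np

def bma_weights_from_bic(bics, prior_probs=None):
    """Approximate posterior model probabilities from BIC values,
    using p(D | M_k) ~ exp(-BIC_k / 2) with (by default) uniform
    prior probabilities over the models."""
    bics = np.asarray(bics, dtype=float)
    if prior_probs is None:
        prior_probs = np.full(len(bics), 1.0 / len(bics))
    # Shift by the minimum BIC for numerical stability before exponentiating.
    unnorm = np.exp(-0.5 * (bics - bics.min())) * prior_probs
    return unnorm / unnorm.sum()

# Hypothetical BICs for the two regression models.
print(bma_weights_from_bic([312.4, 318.9]))
```

Note this only covers models that have a likelihood; your insider estimate would have to enter some other way, e.g. as a prior, as you suggested.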
Second, your likelihood will look different from what you think anyway. Assuming normal distributions and only one covariate x, letting y denote the response with n total observations, it will be:

$$p(y \mid \beta_0, \beta_1, \sigma) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(y_i - \beta_0 - \beta_1 x_i)^2}{2\sigma^2}\right)$$
where $\sigma$ is the standard error of the regression (the residual standard deviation), NOT the forecast standard error. Your likelihoods might look different depending on the particular models you are using. Multiple regression, for example, will have more covariates and thus more regression parameters in the mean function.
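If it helps, here's a quick simulated example (purely illustrative, numpy only) of evaluating that likelihood for a fitted simple regression:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data for a simple regression with one covariate x.
n = 50
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(scale=0.8, size=n)

# Least-squares fit (np.polyfit returns slope first, then intercept).
b1, b0 = np.polyfit(x, y, 1)
resid = y - (b0 + b1 * x)
sigma2 = resid @ resid / n  # maximum-likelihood estimate of sigma^2

# Log of the likelihood above: sum of normal log-densities with
# mean b0 + b1 * x_i and variance sigma^2.
loglik = -0.5 * n * np.log(2 * np.pi * sigma2) - resid @ resid / (2 * sigma2)
print(loglik)
```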