If you obtain posterior probabilities $P(a\mid D_1)$ and $P(a\mid D_2)$ from two independent experiments that share the same prior $P(a)$, the combined posterior is proportional to the product of the individual posteriors divided by that prior:

$$ P(a \mid D_1, D_2) \propto \frac{P(a \mid D_1) \, P(a \mid D_2)}{P(a)} $$

For a uniform prior this reduces, up to normalization, to the simple product $P(a \mid D_1) \, P(a \mid D_2)$.
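As a worked special case (assuming both posteriors are Gaussian and the shared prior is flat, which is not stated in the notes), the combination gives the familiar inverse-variance-weighted average:

$$ P(a \mid D_1, D_2) \propto \exp\left(-\frac{(a-\mu_1)^2}{2\sigma_1^2}\right) \exp\left(-\frac{(a-\mu_2)^2}{2\sigma_2^2}\right) \quad\Rightarrow\quad \mu_{12} = \frac{\mu_1/\sigma_1^2 + \mu_2/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2}, \qquad \frac{1}{\sigma_{12}^2} = \frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2} $$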

On the use of priors

An alternative way of combining results is to use the posterior distribution from one 'experiment' as the prior for the second, instead of analysing each independently and combining the results afterwards. The two procedures are mathematically equivalent, as the sketch below illustrates.
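A minimal numerical sketch of this equivalence on a parameter grid; the Gaussian likelihoods, the $N(0,1)$ prior, and all numbers are illustrative assumptions, not values from the notes:

```python
import numpy as np

# Parameter grid, a (non-flat) prior, and two Gaussian "experiment" likelihoods.
# All numbers here are illustrative assumptions.
a = np.linspace(-5.0, 5.0, 2001)
prior = np.exp(-0.5 * a**2)              # e.g. a N(0, 1) prior on a
prior /= prior.sum()

def likelihood(a, mu, sigma):
    """Assumed Gaussian likelihood of one experiment's data as a function of a."""
    return np.exp(-0.5 * ((a - mu) / sigma) ** 2)

L1 = likelihood(a, mu=1.0, sigma=0.8)    # experiment 1
L2 = likelihood(a, mu=1.6, sigma=0.5)    # experiment 2

# Route A: analyse both experiments together against the original prior.
post_joint = L1 * L2 * prior
post_joint /= post_joint.sum()

# Route B: use the posterior of experiment 1 as the prior for experiment 2.
post_1 = L1 * prior
post_1 /= post_1.sum()
post_seq = L2 * post_1
post_seq /= post_seq.sum()

print(np.allclose(post_joint, post_seq))  # True: the two routes agree
```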

Least squares

Least squares is a special case of the MAP estimate (with a uniform prior it coincides with the MLE) that relies on two assumptions: the measurements are independent, and each error is Gaussian with known width $\sigma_i$, so the per-point likelihood is

$$ \mathcal{L}(y_i \mid f(\theta), \theta, x_i, \sigma_i) = \frac{1}{\sqrt{2\pi} \sigma_i} \exp{\left(-\frac{\left( y_i - f(x_i;\theta) \right)^2}{2\sigma_i^2}\right)} $$

$$ \chi^2(\theta) = \sum_{i=1}^{N} \frac{\left( y_i - f(x_i;\theta) \right)^2}{\sigma_i^2} $$
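Taking $-2\ln$ of the product of the per-point likelihoods over the $N$ independent measurements shows why maximizing the likelihood is the same as minimizing $\chi^2$:

$$ -2 \ln \prod_{i=1}^{N} \mathcal{L}(y_i \mid f(\theta), \theta, x_i, \sigma_i) = \sum_{i=1}^{N} \frac{\left( y_i - f(x_i;\theta) \right)^2}{\sigma_i^2} + {\rm const} = \chi^2(\theta) + {\rm const} $$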

$$ \widehat{\theta}_{\rm MLE} = \arg\min_{\theta} \chi^2(\theta) \quad\Leftrightarrow\quad \left[ \frac{\partial \chi^2}{\partial \theta} \right]_{\theta = \widehat{\theta}} = 0 $$

$$ \sigma^2_{\widehat{\theta}} = \left[ \left( \frac{1}{2} \frac{\partial^2 \chi^2}{\partial \theta^2} \right)^{-1} \right]_{\widehat{\theta}} $$
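A minimal sketch of these two steps, with made-up data and a one-parameter model $f(x;\theta)=\theta x$ (both are illustrative assumptions): minimize $\chi^2$ numerically, then read the parameter variance off the curvature of $\chi^2$ at the minimum.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data: y = 2.5 * x + Gaussian noise, with known sigma_i.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 20)
sigma = np.full_like(x, 0.5)
y = 2.5 * x + rng.normal(0.0, sigma)

def f(x, theta):
    """Assumed one-parameter model f(x; theta) = theta * x."""
    return theta * x

def chi2(theta):
    return np.sum(((y - f(x, theta)) / sigma) ** 2)

# MLE / least-squares estimate: minimize chi^2.
res = minimize(lambda t: chi2(t[0]), x0=[1.0])
theta_hat = res.x[0]

# Variance from the curvature: sigma_theta^2 = (0.5 * d^2 chi2 / d theta^2)^-1,
# evaluated at theta_hat with a central second difference.
h = 1e-3
d2 = (chi2(theta_hat + h) - 2 * chi2(theta_hat) + chi2(theta_hat - h)) / h**2
sigma_theta = np.sqrt(1.0 / (0.5 * d2))

print(theta_hat, sigma_theta)
```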

Linear least squares

$$ f(x_i; \theta) = \sum_{j=1}^{M} A_j(x_i) \, \theta_j $$

Example: $f(x_i;m,b) = m x_i + b$, i.e. $A_1(x) = x$, $A_2(x) = 1$ with $\theta = (m, b)$.
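A minimal sketch of the linear case for this straight-line example, with made-up data and uncertainties (illustrative assumptions): the design matrix has columns $A_j(x_i)$, the weighted problem is solved after whitening by $\sigma_i$, and the parameter covariance follows from the curvature of $\chi^2$, here $(A^T W A)^{-1}$ with $W = {\rm diag}(1/\sigma_i^2)$.

```python
import numpy as np

# Illustrative data for y = m*x + b with per-point uncertainties sigma_i.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 30)
sigma = np.full_like(x, 0.4)
y = 1.8 * x + 0.7 + rng.normal(0.0, sigma)

# Design matrix: columns are the basis functions A_1(x) = x, A_2(x) = 1.
A = np.column_stack([x, np.ones_like(x)])

# Whiten by the uncertainties and solve the weighted least-squares problem.
Aw = A / sigma[:, None]
yw = y / sigma
theta_hat, *_ = np.linalg.lstsq(Aw, yw, rcond=None)   # theta = (m, b)

# Covariance of the estimates: (A^T W A)^{-1} with W = diag(1/sigma^2).
cov = np.linalg.inv(Aw.T @ Aw)

print("m, b =", theta_hat)
print("uncertainties:", np.sqrt(np.diag(cov)))
```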