A New Look at Statistical Invariance
It is well known that maximum likelihood estimation is invariant to the parameterization of the model. When the mapping from one parameterization to another is injective, this leads to an equivalence of the likelihood and, in a Bayesian context, an equivalence of the posterior for carefully chosen prior distributions. The majority of modern statistical procedures, however, require numerically sampling the posterior or numerically optimizing the likelihood. In the context of a Gaussian process model, we show that the parameterization of the model is in fact critical to obtaining meaningful results and ensuring convergence of the optimizer or the Markov chain. The lack of numerical invariance is caused by the behaviour of the gradient near the optimum; in such cases the gradient behaviour is crucial when choosing an appropriate parameterization of the model.
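The theoretical invariance described above can be illustrated with a minimal sketch, not taken from the paper: the model, data, and variable names below are assumptions for illustration only. We compute the maximum likelihood estimate of a Gaussian variance under two injectively related parameterizations, the variance itself and its logarithm, and confirm that both numerical optimizations recover the same variance, even though the gradients the optimizer sees differ in scale between the two parameterizations.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: zero-mean Gaussian samples with true variance 4.0
# (hypothetical example, not the model used in the paper).
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=2.0, size=500)

def nll_var(s2):
    # Negative log-likelihood of zero-mean Gaussian data as a
    # function of the variance s2 (constants dropped).
    return 0.5 * len(x) * np.log(s2) + 0.5 * np.sum(x**2) / s2

# Parameterization 1: optimize over the variance directly.
res_var = minimize(lambda p: nll_var(p[0]), x0=[1.0],
                   bounds=[(1e-6, None)], method="L-BFGS-B")

# Parameterization 2: optimize over theta = log(variance), an
# injective reparameterization, so the MLE should be equivalent.
res_log = minimize(lambda p: nll_var(np.exp(p[0])), x0=[0.0],
                   method="L-BFGS-B")

# Both runs should describe the same variance, close to the
# closed-form MLE mean(x^2) for zero-mean data.
print(res_var.x[0], np.exp(res_log.x[0]), np.mean(x**2))
```

The two optima agree as parameter values, which is the classical invariance; the paper's point is that the numerical path to those optima, governed by the gradient near the optimum, can differ sharply between parameterizations.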