EXTREMAL VARIANCE AND ITS GENERALIZATION
A question of much practical importance is how to reduce $n$ disparate measures $X_1, X_2, \ldots, X_n$ of an entity to a composite measure $Y$ of the form $Y = \sum_{i=1}^{n} c_i X_i$, where the constant $c_i$ is the contribution of $X_i$ to $Y$ per unit of $X_i$, and $c_i X_i$ is its total contribution. The usual solution to this problem is to minimize the variance of the random variable $Y$ with respect to the constants $c_1, c_2, \ldots, c_n$, subject to an equality constraint. In this paper, we examine this minimum variance estimator for its linkage with the probability density function concerned, in order to make it useful when the expected value of $Y$ is strictly positive or negative. In so doing, we have found that, subject to a usual equality constraint, the variance of $Y$ can have minima, maxima and saddle points, and hence that our treatment of minimum variance estimators as a paradigm in statistics perhaps needs rethinking. More importantly, we have found that, although extremizing the variance of a composite measure of an entity is equivalent to extremizing the probability density function of a normal random variable with respect to all its variables, this is not so for non-normal random variables, for which we need to extremize each corresponding probability density function with respect to all its variables. We illustrate this idea with a set of data on the growth of the Tasmanian abalone Haliotis rubra for the log-normal and gamma distributions, as well as for the normal distribution.
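The constrained minimization described above can be sketched numerically. Assuming the usual equality constraint $\sum_i c_i = 1$ (the paper says only "a usual equality constraint", so this particular normalization is an assumption), a Lagrange-multiplier argument gives the closed-form minimizer $c = S^{-1}\mathbf{1} / (\mathbf{1}^{\mathsf T} S^{-1}\mathbf{1})$ for a covariance matrix $S$. The covariance values below are purely illustrative, not taken from the abalone data:

```python
import numpy as np

def min_variance_weights(S):
    """Minimize Var(Y) = c' S c subject to sum(c) = 1.

    Closed form from the Lagrangian: c = S^{-1} 1 / (1' S^{-1} 1).
    Assumes S is a symmetric positive-definite covariance matrix.
    """
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)   # w = S^{-1} 1
    return w / (ones @ w)          # normalize so the weights sum to 1

# Hypothetical covariance matrix of three disparate measures X1, X2, X3
S = np.array([[2.0, 0.3, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.2, 3.0]])

c = min_variance_weights(S)
print("weights:", c)
print("Var(Y):", c @ S @ c)
```

Because the variance is a convex quadratic in $c$ here, this stationary point is a genuine minimum; the abstract's point is that for non-normal distributions the analogous extremization of the density need not yield a minimum at all.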
Keywords: minimum variance, generalization, extremization, measures, composite measure.