AN EMPIRICAL TEST OF INDEPENDENCE BETWEEN RANDOM VECTORS
Let X_1, ..., X_k be k random vectors with joint probability measure m (assumed to be absolutely continuous with respect to a given measure) and marginal measures m_1, ..., m_k. In order to test the hypothesis of independence H_0: m = m_1 ⊗ ... ⊗ m_k between the random vectors, one extends, by means of an equivalence relation on the set of negative orthants, a test of independence between the components of a random vector X introduced by the authors in [7]. Examples and simulations show how to conduct this test and illustrate its power empirically. An empirical comparison of this test with three competitors, including the likelihood ratio test, is given using multivariate normal and lognormal distributions.
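As an illustration of the general idea (not the authors' exact statistic from [7]), the following sketch tests independence between two scalar variables by comparing the joint empirical CDF with the product of the marginal empirical CDFs, evaluated over the negative orthants anchored at the sample points. The supremum-type statistic and the permutation calibration are assumptions chosen for simplicity; under the null of independence the permutation distribution is exact, which makes the procedure distribution-free.

```python
import numpy as np

def independence_statistic(x, y):
    """Sup-norm distance between the joint empirical CDF and the product
    of marginal empirical CDFs, evaluated at the sample points, i.e.
    over the negative orthants (-inf, x_i] x (-inf, y_i]."""
    # Indicator matrices: ix[i, j] = 1 if x[j] <= x[i]
    ix = x[None, :] <= x[:, None]
    iy = y[None, :] <= y[:, None]
    joint = (ix & iy).mean(axis=1)             # H_n(x_i, y_i)
    prod = ix.mean(axis=1) * iy.mean(axis=1)   # F_n(x_i) * G_n(y_i)
    return np.max(np.abs(joint - prod))

def permutation_pvalue(x, y, n_perm=199, seed=None):
    """Calibrate the statistic by permuting y, which is valid under
    the null hypothesis that x and y are independent."""
    rng = np.random.default_rng(seed)
    t_obs = independence_statistic(x, y)
    count = sum(
        independence_statistic(x, rng.permutation(y)) >= t_obs
        for _ in range(n_perm)
    )
    return (1 + count) / (1 + n_perm)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    y = 0.6 * x + rng.normal(size=200)  # a clearly dependent pair
    print(permutation_pvalue(x, y, seed=1))
```

With dependent data as above, the p-value is small, so the null of independence is rejected; replacing y with an independent sample typically yields a non-significant p-value.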
Keywords: independence, distribution-free statistic, cumulative distribution function, goodness-of-fit test, negative orthant.