Pages with this concept

Page 136:
probability functions we can write the information radius as follows: ... The interesting interpretation of the information radius that I referred to above is illustrated most easily in terms of continuous probability functions.
... $R(\mu_1, \mu_2 / \nu) = u\,I(\mu_1/\nu) + v\,I(\mu_2/\nu)$, where $I(\mu_i/\nu)$ measures the expectation on $\mu_i$ of the information in favour of rejecting $\nu$ for $\mu_i$ given by making an observation; it may be regarded as the information gained from being told to reject $\nu$ in favour of $\mu_i$.
... thereby removing the arbitrary $\nu$
... $\nu = u\mu_1 + v\mu_2$, that is, an average of the two distributions to be discriminated.
... $p(x) = p(x|w_1)P(w_1) + p(x|w_2)P(w_2)$, defined over the entire collection without regard to relevance.
... There is one technical problem associated with the use of the information radius, or any other discrimination measure based on all four cells of the contingency table, which is rather difficult to resolve.
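For discrete distributions the decomposition above, together with the choice $\nu = u\mu_1 + v\mu_2$, fully determines the measure, so it is easy to compute. The following is a minimal sketch of my own (the function names are not from the text), assuming finite distributions given as arrays:

```python
import numpy as np

def directed_divergence(mu, nu):
    """I(mu/nu): expectation on mu of the information in favour of rejecting nu for mu."""
    mask = mu > 0  # terms with mu(x) = 0 contribute nothing
    return float(np.sum(mu[mask] * np.log(mu[mask] / nu[mask])))

def information_radius(mu1, mu2, u=0.5, v=0.5):
    """R(mu1, mu2 / nu) = u*I(mu1/nu) + v*I(mu2/nu), with nu = u*mu1 + v*mu2."""
    nu = u * mu1 + v * mu2
    return u * directed_divergence(mu1, nu) + v * directed_divergence(mu2, nu)

mu1 = np.array([0.7, 0.2, 0.1])
mu2 = np.array([0.1, 0.3, 0.6])
print(information_radius(mu1, mu2))  # symmetric in mu1, mu2 and finite even where one density is zero
```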
Page 42:
nice property of being invariant under one-to-one transformations of the co-ordinates.
... A function very similar to the expected mutual information measure was suggested by Jardine and Sibson [2] specifically to measure dissimilarity between two classes of objects.
... Here u and v are positive weights adding to unity.
... $P(x) = P(x|w_1)P(w_1) + P(x|w_2)P(w_2)$, $x = 0, 1$, and $P(x, w_i) = P(x|w_i)P(w_i)$, $i = 1, 2$, we recover the expected mutual information measure $I(x, w_i)$.
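The reduction can be checked numerically. Below is a sketch of my own (the numbers are assumed, not from the text) verifying that the information radius with weights $u = P(w_1)$, $v = P(w_2)$ and reference distribution $P(x)$ equals the expected mutual information between a binary attribute $x$ and the class variable $w$:

```python
import numpy as np

# Assumed numbers for a binary attribute x under two classes w1, w2.
P_w = np.array([0.3, 0.7])                   # priors P(w1), P(w2)
P_x_given_w = np.array([[0.8, 0.2],          # P(x|w1) for x = 0, 1
                        [0.4, 0.6]])         # P(x|w2) for x = 0, 1

u, v = P_w                                   # weights set to the priors
P_x = u * P_x_given_w[0] + v * P_x_given_w[1]    # mixture P(x)

# Information radius with the reference distribution taken as P(x).
R = sum(P_w[i] * np.sum(P_x_given_w[i] * np.log(P_x_given_w[i] / P_x))
        for i in range(2))

# Expected mutual information from the joint P(x, w_i) = P(x|w_i) P(w_i).
P_joint = P_x_given_w * P_w[:, None]
I = np.sum(P_joint * np.log(P_joint / (P_w[:, None] * P_x[None, :])))

print(R, I)  # the two values coincide
```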
Page 138:
where $\rho$ ... $\rho(X,Y|W) = 0$, which implies, using the expression for the partial correlation, that $\rho(X,Y) = \rho(X,W)\,\rho(Y,W)$. Since $\rho(X,Y) < 1$, $\rho(X,W) < 1$, $\rho(Y,W) < 1$, this in turn implies that under the hypothesis of conditional independence $\rho(X,Y) < \rho(X,W)$ or $\rho(Y,W)$. Hence if W is a random variable representing relevance, then the correlation between it and either index term is greater than the correlation between the index terms.
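For reference, the expression for the partial correlation being invoked here (supplied for convenience; it is not reproduced in the snippet) is the standard one:

$$\rho(X,Y \mid W) \;=\; \frac{\rho(X,Y) - \rho(X,W)\,\rho(Y,W)}{\sqrt{\bigl(1-\rho(X,W)^2\bigr)\bigl(1-\rho(Y,W)^2\bigr)}},$$

so $\rho(X,Y \mid W) = 0$ forces the numerator to vanish, giving $\rho(X,Y) = \rho(X,W)\,\rho(Y,W)$.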
... Qualitatively I shall try and generalise this to functions other than correlation coefficients. Linfoot [27] defines a type of informational correlation measure by $r_{ij} = [1 - \exp(-2\,I(x_i, x_j))]^{1/2}$, $0 \le r_{ij} \le 1$, where $I(x_i, x_j)$ is the now familiar expected mutual information measure.
... $I(x_i, x_j) < I(x_i, W)$ or $I(x_j, W)$, where $I$ ...
... Discrimination Gain Hypothesis: Under the hypothesis of conditional independence the statistical information contained in one index term about another is less than the information contained in either index term about relevance.
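As a small illustration of my own (not from the text): Linfoot's transform is strictly increasing in $I$, so an inequality between information values, such as the one the hypothesis asserts, carries over directly to the correlation-like scale.

```python
import math

def linfoot_r(I):
    """Linfoot's informational correlation r = (1 - exp(-2*I))**0.5, mapping I >= 0 into [0, 1)."""
    return math.sqrt(1.0 - math.exp(-2.0 * I))

# Monotone in I: if I(x_i, x_j) < I(x_i, W), then r_ij < r_iW as well.
for I in (0.0, 0.05, 0.2, 1.0):
    print(f"I = {I:.2f}  ->  r = {linfoot_r(I):.3f}")
```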