Concepts
Similar pages
| Similarity | Page | Snapshot |
| 120 |
convenience let us set ... There are a number of ways of looking at $K_i$
...Typically the weight $K_i(N, r, n, R)$ is estimated from a contingency table in which $N$ is not the total number of documents in the system, but instead some subset specifically chosen to enable $K_i$ to be estimated
...The index terms are not independent. Although it may be mathematically convenient to assume that the index terms are independent, it by no means follows that it is realistic to do so
... |
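The contingency-table estimation described in this snippet can be sketched as follows. The snippet does not fix the exact form of $K_i(N, r, n, R)$; the log-odds form below, with a 0.5 continuity correction for empty cells, is one standard choice, and the function name and sample numbers are purely illustrative.

```python
import math

def term_weight(N, R, n, r):
    """Log-odds weight for one index term, estimated from a 2x2
    contingency table over a chosen subset of N documents:
      r = relevant docs containing the term,  R = relevant docs,
      n = docs containing the term,           N = docs in the subset.
    The 0.5 terms are the usual continuity correction for empty cells.
    (Illustrative form -- the text's K_i need not be exactly this.)"""
    return math.log(((r + 0.5) * (N - n - R + r + 0.5)) /
                    ((n - r + 0.5) * (R - r + 0.5)))

# A term concentrated in the relevant documents gets a large positive
# weight; a term spread evenly gets a weight near zero.
w_good = term_weight(N=1000, R=20, n=30, r=15)
w_flat = term_weight(N=1000, R=20, n=50, r=1)
assert w_good > w_flat
```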
| 118 |
Theorem is the best way of getting at it
...$P(\mathbf{x} \mid w_i) = P(x_1 \mid w_i)\,P(x_2 \mid w_i)\cdots$
...Later I shall show how this stringent assumption may be relaxed
...Let us now take the simplified form of P x wi and work out what the decision rule will look like
...$p_i = \mathrm{Prob}(x_i = 1 \mid w_1)$, $q_i = \mathrm{Prob}(x_i = 1 \mid w_2)$
...In words, $p_i$ ($q_i$) is the probability that, if the document is relevant (non-relevant), the $i$th index term will be present
...To appreciate how these expressions work, the reader should check that $P((0,1,1,0,0,1) \mid w_1) = (1-p_1)\,p_2\,p_3\,(1-p_4)\,(1-p_5)\,p_6$
...where the constants $a_i$, $b_i$ and $e$ are obvious
... |
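The independence product in this snippet can be checked mechanically. The sketch below uses made-up values for the $p_i$ and verifies the worked example $P((0,1,1,0,0,1) \mid w_1)$ term by term:

```python
import math

def likelihood(x, p):
    """P(x | class) under the term-independence assumption:
    the product of p_i when x_i = 1 and (1 - p_i) when x_i = 0."""
    out = 1.0
    for xi, pi in zip(x, p):
        out *= pi if xi == 1 else 1.0 - pi
    return out

# The worked example from the text: for x = (0,1,1,0,0,1),
# P(x | w1) = (1-p1) p2 p3 (1-p4) (1-p5) p6.
p = [0.2, 0.7, 0.6, 0.1, 0.3, 0.9]   # illustrative P(x_i = 1 | w1)
x = [0, 1, 1, 0, 0, 1]
lhs = likelihood(x, p)
rhs = (1 - p[0]) * p[1] * p[2] * (1 - p[3]) * (1 - p[4]) * p[5]
assert math.isclose(lhs, rhs)
```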
| 128 |
objected to on the same grounds that one might object to the probability of Newton's Second Law of Motion being the case
...To approach the problem in this way would be useless unless one believed that, for many index terms, the distribution over the relevant documents is different from that over the non-relevant documents
...The elaboration in terms of ranking rather than just discrimination is trivial: the cut-off set by the constant in $g(\mathbf{x})$ is gradually relaxed, thereby increasing the number of documents retrieved (or assigned to the relevant category)
...If one is prepared to let the user set the cut-off after retrieval has taken place, then the need for a theory about cut-off disappears
... |
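The point in this snippet, that gradually relaxing the cut-off on $g(\mathbf{x})$ turns a yes/no discrimination into a ranking, can be seen with a few invented scores (document names and values below are illustrative):

```python
def retrieved(scores, cutoff):
    """The set of documents whose score g(x) meets the cut-off."""
    return {doc for doc, g in scores.items() if g >= cutoff}

scores = {"d1": 3.2, "d2": 1.1, "d3": -0.4, "d4": 2.0}

# Relaxing the cut-off only ever enlarges the retrieved set, so the
# nested sets induce a ranking: d1, d4, d2, d3.
sets = [retrieved(scores, c) for c in (3.0, 1.5, 1.0, -1.0)]
for smaller, larger in zip(sets, sets[1:]):
    assert smaller <= larger   # discrimination becomes ranking
```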
| 137 |
the different contributions made to the measure by the different cells
...Discrimination gain hypothesis: in the derivation above I have made the assumption of independence or dependence in a straightforward way
...$P(x_i, x_j) = P(x_i, x_j \mid w_1)\,P(w_1) + P(x_i, x_j \mid w_2)\,P(w_2)$
$P(x_i)\,P(x_j) = [P(x_i \mid w_1)\,P(w_1) + P(x_i \mid w_2)\,P(w_2)]\,[P(x_j \mid w_1)\,P(w_1) + P(x_j \mid w_2)\,P(w_2)]$
If we assume conditional independence on both $w_1$ and $w_2$, then $P(x_i, x_j) = P(x_i \mid w_1)\,P(x_j \mid w_1)\,P(w_1) + P(x_i \mid w_2)\,P(x_j \mid w_2)\,P(w_2)$. For unconditional independence as well, we must have $P(x_i, x_j) = P(x_i)\,P(x_j)$. This will only happen when $P(w_1) = 0$ or $P(w_2) = 0$, or $P(x_i \mid w_1) = P(x_i \mid w_2)$, or $P(x_j \mid w_1) = P(x_j \mid w_2)$, or in words, when at least one of the index terms is useless at discriminating relevant from non-relevant documents
...Kendall and Stuart [26] define a partial correlation coefficient for any two distributions by |
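The claim in this snippet, that conditional independence under both $w_1$ and $w_2$ yields unconditional independence only when one of the terms stops discriminating, checks out numerically. The probabilities below are invented for illustration:

```python
import math

# Class priors and per-class term probabilities P(x_i = 1 | w).
Pw1, Pw2 = 0.3, 0.7
pi1, pi2 = 0.8, 0.2          # P(x_i = 1 | w1), P(x_i = 1 | w2)
pj1, pj2 = 0.9, 0.1          # P(x_j = 1 | w1), P(x_j = 1 | w2)

# Conditional independence within each class gives the mixture:
P_ij = pi1 * pj1 * Pw1 + pi2 * pj2 * Pw2
P_i  = pi1 * Pw1 + pi2 * Pw2
P_j  = pj1 * Pw1 + pj2 * Pw2

# Both terms discriminate (pi1 != pi2, pj1 != pj2), so they are
# NOT unconditionally independent:
assert not math.isclose(P_ij, P_i * P_j)

# Make term i useless (pi1 == pi2) and the factorisation holds:
pi1 = pi2 = 0.5
P_ij = pi1 * pj1 * Pw1 + pi2 * pj2 * Pw2
P_i  = pi1 * Pw1 + pi2 * Pw2
assert math.isclose(P_ij, P_i * P_j)
```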
| 140 |
derives from the work of Yu and his collaborators [28, 29] ... According to Doyle [32], p.
...The model in this chapter also connects with two other ideas in earlier research
...or in words, for any document the probability of relevance is inversely proportional to the probability with which it will occur on a random basis
... |
| 115 |
...Basic probabilistic model. Since we are assuming that each document is described by the presence or absence of index terms, any document can be represented by a binary vector $\mathbf{x} = (x_1, x_2, \ldots)$, where $x_i = 0$ or $1$ indicates absence or presence of the $i$th index term
...$w_1$ = document is relevant, $w_2$ = document is non-relevant
...The theory that follows is at first rather abstract; the reader is asked to bear with it, since we soon return to the nuts and bolts of retrieval
...So, in terms of these symbols, what we wish to calculate for each document is $P(w_1 \mid \mathbf{x})$ and perhaps $P(w_2 \mid \mathbf{x})$, so that we may decide which is relevant and which is non-relevant
...Here $P(w_i)$ is the prior probability of relevance ($i = 1$) or non-relevance ($i = 2$); $P(\mathbf{x} \mid w_i)$ is proportional to what is commonly known as the likelihood of relevance or non-relevance given $\mathbf{x}$; in the continuous case this would be a density function and we would write $p(\mathbf{x} \mid w_i)$
...which is the probability of observing $\mathbf{x}$ on a random basis, given that it may be either relevant or non-relevant
... |
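The Bayesian computation this snippet describes can be sketched directly: the posterior $P(w_1 \mid \mathbf{x})$ follows from the priors, the per-class likelihoods, and the mixture $P(\mathbf{x})$. The priors and term probabilities below are invented for illustration:

```python
import math

def likelihood(x, p):
    """P(x | class) under term independence."""
    out = 1.0
    for xi, pi in zip(x, p):
        out *= pi if xi == 1 else 1.0 - pi
    return out

# Illustrative priors and per-class term probabilities.
P_w1, P_w2 = 0.1, 0.9
p = [0.8, 0.6, 0.7]          # P(x_i = 1 | relevant)
q = [0.2, 0.3, 0.4]          # P(x_i = 1 | non-relevant)

x = [1, 0, 1]
# P(x): probability of observing x on a random basis, given that it
# may be either relevant or non-relevant.
Px = likelihood(x, p) * P_w1 + likelihood(x, q) * P_w2
post_rel = likelihood(x, p) * P_w1 / Px   # P(w1 | x)
post_non = likelihood(x, q) * P_w2 / Px   # P(w2 | x)
assert math.isclose(post_rel + post_non, 1.0)
```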
| 133 |
3
...It must be emphasised that in the non-linear case the estimation of the parameters for $g(\mathbf{x})$ will ideally involve a different MST for each of $P(\mathbf{x} \mid w_1)$ and $P(\mathbf{x} \mid w_2)$
...There is a choice of how one would implement the model for $g(\mathbf{x})$, depending on whether one is interested in setting the cut-off a priori or a posteriori
...If one assumes that the cut-off is set a posteriori, then we can rank the documents according to $P(w_1 \mid \mathbf{x})$ and leave the user to decide when he has seen enough
...to calculate (estimate) the probability of relevance for each document $\mathbf{x}$
... |
| 114 |
the system to its user will be the best that is obtainable on the basis of those data
...Of course this principle raises many questions as to the acceptability of the assumptions
...The probability ranking principle assumes that we can calculate $P(\text{relevance} \mid \text{document})$; not only that, it assumes that we can do it accurately
...So, returning now to the immediate problem, which is to calculate, or estimate, $P(\text{relevance} \mid \text{document})$
... |
| 29 |
subsets differing in the extent to which they are about a word $w$, then the distribution of $w$ can be described by a mixture of two Poisson distributions
...here $p_1$ is the probability of a random document belonging to one of the subsets, and $x_1$ and $x_2$ are the mean occurrences in the two classes
...Although Harter [31] uses 'function' in his wording of this assumption, I think 'measure' would have been more appropriate
...assumption 1, we can calculate the probability of relevance for any document from one of these classes
...that is used to make the decision whether to assign an index term $w$ that occurs $k$ times in a document
...Finally,although tests have shown that this model assigns sensible index terms,it has not been tested from the point of view of its effectiveness in retrieval
...Discrimination and/or representation. There are two conflicting ways of looking at the problem of characterising documents for retrieval
... |