Statistics of user trial results

We now have results from our user trials showing how effective sux0r may be at filtering journal table-of-contents RSS feeds for the items that are relevant to a user's research interests.

Quick reminder of how we ran the trials: 20 users had access to sux0r for 6 weeks to train the analyser in what they found interesting and not interesting. We then barred access for 4 weeks but continued to aggregate feeds and filter them based on that training. Then we invited the users to look at the results of the filtering as two feeds from sux0r: one aggregating information about the journal articles published while the users were barred that sux0r predicted they would find relevant; the other containing information about the rest of the articles, the ones that sux0r predicted they wouldn't find relevant. We had our users look through both feeds and tell us whether the articles really were relevant to their research interests. We lost two triallists, so we have data on 18; you can see this data as a web page (or get the spreadsheet if you prefer).

The initial data needs a little explanation. The first columns (in yellow) relate to the items used in the initial six weeks to train the Bayesian analyser: the number marked as relevant to the user's research interests, the number marked as not relevant, and the total number of items used in training. The "Additional docs" column relates to information added that didn't come from the RSS feeds: we asked users to provide some documents that were relevant to their research interests for training, to make up for the fact that in a fairly short trial period the number of relevant items published may be low.

The next set of columns (in green) relate to the feed of items aggregated after the training (while the users had no access) that were predicted to match the user's research interests, showing the number of items of interest in that feed, the total number of items in that feed, and the proportion of items in the feed that were interesting. The next three columns (in red) do exactly the same for the feed of items that were predicted not to be relevant.

For a quick overview of the results, here’s a chart of the fraction of interesting items in both feeds:

You need to be careful interpreting this chart. It hides some things. For example, the data point showing that the fraction of interesting items in one of the feeds was 1 (i.e. the feed of interesting items did indeed contain only interesting items) hides the fact that this feed had only 2 items in it; the user found 9 items overall to be relevant to their research interests, and 7 of them were in the wrong feed. Perhaps that's not so good.
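In information-retrieval terms, the chart shows something like precision while hiding recall. Here is a minimal sketch of the distinction, using only the figures quoted in the example above:

    # Worked example using the figures quoted above: one user's "interesting"
    # feed contained only 2 items, both genuinely relevant, but the user found
    # 9 relevant items overall, 7 of which were in the "uninteresting" feed.
    relevant_in_interesting_feed = 2
    total_in_interesting_feed = 2
    relevant_overall = 9

    precision = relevant_in_interesting_feed / total_in_interesting_feed  # 1.0
    recall = relevant_in_interesting_feed / relevant_overall              # roughly 0.22

    print(f"precision = {precision:.2f}, recall = {recall:.2f}")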

So, did it work? Well, one way of rephrasing that question is to ask whether the feed that was supposed to be relevant (the "interesting" feed) did indeed contain a higher concentration of items relevant to the user's research interests than would otherwise have been the case. That is, is the proportion of interesting items in the interesting feed higher than the proportion of interesting items in the two feeds combined? The answer in all but one case is yes, typically by a factor of between two and three. (The exception is a feed which achieved similar success in getting it wrong. We don't know what happened here.)
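To make that comparison concrete, here is a minimal sketch of the calculation; the counts are illustrative placeholders, not figures from the trial spreadsheet:

    # Sketch of the comparison described above; the counts are placeholders.
    interesting_in_interesting_feed = 30   # relevant items in the "interesting" feed
    total_in_interesting_feed = 50         # all items in the "interesting" feed
    interesting_in_other_feed = 10         # relevant items in the "not interesting" feed
    total_in_other_feed = 150              # all items in the "not interesting" feed

    # Proportion of relevant items in the filtered ("interesting") feed.
    p_filtered = interesting_in_interesting_feed / total_in_interesting_feed

    # Proportion of relevant items across both feeds combined, i.e. what the
    # user would have seen with no filtering at all.
    p_unfiltered = (interesting_in_interesting_feed + interesting_in_other_feed) / (
        total_in_interesting_feed + total_in_other_feed
    )

    print(f"filtered: {p_filtered:.2f}, unfiltered: {p_unfiltered:.2f}, "
          f"improvement factor: {p_filtered / p_unfiltered:.1f}x")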

We can also look at the false negatives, i.e. the items that really were relevant to the user's interests but ended up in the feed that was predicted not to be interesting. The chart above shows quite nicely that once about 150 items had been used for training this number was very low.

What about some statistics? It's worth checking whether the increase in concentration of items related to a user's research interests as a result of filtering is statistically significant. We used a two-sample Z test to compare the difference in the proportions of interesting items in the two feeds to the magnitude of difference that could be expected to happen as the result of chance.
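For reference, here is a sketch of a standard two-sample Z test for the difference between two proportions (the pooled form of the statistic); the counts passed to it are placeholders rather than figures from the trial:

    from math import sqrt

    def two_proportion_z(successes_1, n_1, successes_2, n_2):
        """Standard two-sample Z statistic for comparing two proportions,
        using the pooled estimate of the common proportion under the null
        hypothesis that the two underlying proportions are equal."""
        p1 = successes_1 / n_1
        p2 = successes_2 / n_2
        p_pooled = (successes_1 + successes_2) / (n_1 + n_2)
        standard_error = sqrt(p_pooled * (1 - p_pooled) * (1 / n_1 + 1 / n_2))
        return (p1 - p2) / standard_error

    # Placeholder counts: 30 of 50 items in the "interesting" feed were relevant,
    # versus 10 of 150 items in the "not interesting" feed.
    z = two_proportion_z(30, 50, 10, 150)
    print(f"Z = {z:.2f}")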

I have some reservations about this because of the small number of "interesting" items found in the feed that should be uninteresting when the filtering works (which means that one of the assumptions of the Z test might not be valid when the filtering is working best), but any value of Z above 3 cannot reasonably be expected to have happened by chance.

Conclusion: for users who trained with more than about 150 items, the filtering produced a statistically significant increase in the proportion of items in the feed that were relevant to the user's research interests, without filtering out a large number of items that would have been of interest. Next post: were the users happy with these results?
