Tag Archives: progressPosts

How To Use Bayesian Feed Filter

I have created 5 screencasts showing how to use the Bayesian Feed Filter.

  1. How to Register an account on Bayesian Feed Filter http://screenr.com/WkA
    • Go to http://icbl.macs.hw.ac.uk/sux0r206/
    • Click on Register (Top Right of Screen)
    • Enter a Nickname, Email Address and Password
    • Verify Your Password
    • Add Any Additional Information
    • Enter the Anti-Spam Code
    • Click Submit
  2. How to Login and Subscribe to RSS Feeds http://screenr.com/ckA
    • Once you have registered an account click on Login (Top Right of Screen)
    • Enter your Nickname and Password
    • Click on Feeds (You will be presented with a list of all feeds on your first login)
    • Scroll to the bottom of the list and click on Manage Feeds
    • Select the Checkboxes of the feeds you would like to subscribe to
    • Click Submit
    • You can add a new feed by clicking on Suggest Feed (an administrator will need to approve the feed first)
    • You can browse the feeds by clicking on the titles of the feeds
  3. How to train Bayesian Feed Filter to Filter your RSS Feeds http://screenr.com/3kA
    • Once you have logged in to your account and subscribed to some feeds you can start training
    • Click on your nickname (Top right of the screen)
    • Click on Edit Bayesian
    • Enter the name of a vector (list of categories) and click add (in this case the vector is called Interestingness)
    • Enter the name of your first category and click add (in this case Interesting)
    • Enter the name of your second category and click add (in this case Not Interesting)
    • Click on your nickname, then on Feeds
    • You can start training items by clicking on the drop down menu of categories
    • If the item already displays the category you wish to train it in, first select the other category and then reselect the correct one
    • Items that have been trained will display the Vector as green text
  4. How to train Bayesian Feed Filter using other documents http://screenr.com/vSK
    • Once you have logged in to your account and subscribed to some feeds you can start training
    • Click on your nickname (Top right of the screen)
    • Click on Edit Bayesian
    • Copy and paste text from other documents into the Train Document text area
    • Select the category and click train
    • You can also categorise other documents
    • Copy and paste text from other documents into the Categorize Document text area
    • Select the vector and click categorize.
    • The probability of the document belonging to each category in the vector will be displayed.
  5. How to view filtered RSS Items by threshold/keywords http://screenr.com/y1K
    • Click on Feeds
    • At the top of the screen select the category and set a threshold
    • Click on threshold
    • Only the items relevant to the selected category above the set threshold are displayed
    • To filter by keywords, type your keywords into the keywords text box
    • Click on threshold
    • Only the items containing those keywords will be displayed

1 Comment

Filed under dissemination

BayesFF: Final post

Diagram of prototype: schematically we can show how the prototype supports the aggregation of RSS feeds comprising table of contents information from selected journals and filters them (using pre-existing software called sux0r) into two feeds, one of which has information about those papers that are predicted to be relevant to a user’s research interests. The project has added the ability to interact with sux0r through third-party software.

Our work has shown how effectively this works for a trial group of researchers; in most cases, after sufficient training of the system, the outgoing feeds were successfully filtered so that one contained a significantly higher concentration of interesting items than the raw feeds and the other did not contain a significant number of interesting items.

End User of Prototype:
We have an installation of sux0r which people are welcome to register on and which can be used to set up feeds for aggregation (you will not automatically be given sufficient privileges to approve feeds, so it is best to contact the project about this). The base URL for the API for this installation is http://icbl.macs.hw.ac.uk/sux0rAPI/icbl/ and the API calls which have been implemented are documented in the following posts on this blog: Return RSS items for a user and ReturnVectors and ReturnCategories. Also available: a summary of the other API features that have been scoped. The latest update was 08 December 2009.
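As a quick illustration (a minimal sketch, assuming the demo installation above is reachable), a third-party application can pull a filtered feed with a plain HTTP GET; see the feature posts listed below for the full parameter set:

# Minimal sketch: fetch a user's filtered RSS items via the API.
# Assumes the demo installation above is up; standard library only.
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "http://icbl.macs.hw.ac.uk/sux0rAPI/icbl"
query = urlencode({"user": "philb", "maxHits": 20})

with urlopen(BASE + "/api/items/?" + query) as resp:
    print(resp.read().decode("utf-8")[:200])   # plain RSS 2.0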

Here’s a screencast of Lisa using the API


(NB the version at the end of the link is a whole lot clearer than the embedded YouTube version, especially if you click on the view in HD option).

The code for our work on the API is in a branch of the main sux0r repository on SourceForge.

Project Team
Phil Barker, philb@icbl.hw.ac.uk, Heriot-Watt University (project manager)
Santiago Chumbe, S.Chumbe@hw.ac.uk, Heriot-Watt University (developer)
Lisa J Rogers, l.j.rogers@hw.ac.uk, Heriot-Watt University (researcher)

Project Website: http://www.icbl.hw.ac.uk/bayesff/
PIMS entry: https://pims.jisc.ac.uk/projects/view/1360

Table of Content for Project Posts
Development work

User trialling

Community Engagement

Project Management

2 Comments

Filed under management

noAuth

One of the “weaknesses” I put in the SWOT analysis was that we had a lot to learn. Fully understanding and implementing authentication and authorization for the API was one of the things that we had to learn. As of now, at the end of the funded work on the project, we seem to have failed in this.

Our first point of failure was in being pointlessly over-ambitious in what we wanted to do via the API. When drawing up the initial feature set for the API I took the starting position that anything you could do through the native sux0r interface should be doable remotely; so the feature set included registering a new user. This muddied the requirements for accessing the sux0r security procedures in a way that I can now see was quite unnecessary: it’s really not unreasonable to expect people to have an account with a service before they interact with it from another application.

Having clarified this it became clear that OAuth would be the authorization mechanism of choice, though we had no experience in implementing it. Santy got a client working with Twitter and Flickr based on Andy Smith’s library. He used the Google PHP OAuth library for the server on sux0r, but it didn’t work with either that client or Google’s own client. There is another library he would like to test for the server side, but he had already spent more time than was available.

Struggling with OAuth meant less time to spend on actual features. In retrospect we should have implemented the features without authorization in the hope of adding some form of authorization later (which is indeed what Santy has done towards the end of the project), but it is always tempting to keep trying one more thing in the hope that the next try will succeed.

As a result we have fewer features implemented than we planned, and features that should require authorization don’t have it. We still hope to add some form of restriction on access; even HTTP digest authentication requiring the sux0r user name and password to be entered into the third-party app would be better than nothing.
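To illustrate what that fallback would look like from the third-party side, here is a minimal sketch using the Python requests library; to be clear, digest authentication is not actually enabled on our server yet, so this is purely hypothetical:

# Hypothetical sketch: digest auth is NOT yet enabled on our server.
# This only shows what the third-party client side could look like.
import requests
from requests.auth import HTTPDigestAuth

resp = requests.get(
    "http://icbl.macs.hw.ac.uk/sux0rAPI/icbl/api/items/",
    params={"user": "philb", "maxHits": 20},
    auth=HTTPDigestAuth("nickname", "password"),  # sux0r credentials
)
print(resp.status_code)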

Lessons learnt: 1) you don’t have to do everything through an API (god, that seems obvious when I write it); 2) get on with what you can do in parallel to trying to overcome roadblocks; 3) analysing the problem and implementing the client did give us a better understanding of what OAuth should do.

2 Comments

Filed under management, technical

User Trials Follow Up Satisfaction Survey

The user trials consisted of 5 main stages.

  • An initial meeting to demonstrate the system.
  • An initial questionnaire to gather expectations.
  • Training: users spent between 4 and 6 weeks training the system.
  • A follow-up meeting to indicate how successfully their interests had been matched.
  • A follow-up questionnaire to gauge the users’ satisfaction.

The results of the follow-up survey are discussed in more detail below.
Question 1.

Were enough “Not Interesting” articles filtered out of the “Interesting” feed to make reading this feed manageable?

Though the percentages of interesting items delivered to each user were in general lower than the users had indicated would be acceptable in the initial questionnaire, the users seemed happy with this result, and in most cases the percentage of “not interesting” items in the “interesting” feed was greatly reduced.

13 users answered yes; 4 answered no; 1 was not sure.

Question 2.

If the “Not Interesting” feed wrongly contained “interesting” articles, was the percentage small enough to tolerate?

The majority of the users were able to tolerate some “interesting” articles being filtered out into the “not interesting” feed.

15 users answered yes; 3 answered no.

Question 3.

Would you consider using a similar tool in the future?

The majority of users indicated that they would consider using a similar tool in the future. This gives us a certain confidence that the concept of applying Bayesian filtering to journal articles is worth investigating further.

15 users answered yes; 2 answered maybe; 1 answered no.

Question 3 cont…

If yes, which of the following would you consider?

[a] A stand-alone tool?
[b] A tool integrated into an existing tool you use every day, e.g. an email client, feed reader, or iGoogle?
[c] Integrated into a library or research tool such as Web of Science?

Users were able to enter more than one choice.

There were 6 votes for [a]; 13 votes for [b]; 12 votes for [c]

Users were then asked which of the above would be their preferred option.

3 voted for [a]; 6 voted for [b]; 7 voted for [c]; 1 user thought daily/weekly email alerts would be a better option.

The strong preference for integration into other tools (options b and c) rather than use as a stand-alone tool is interesting, as it validates our supposition that an API would be useful, i.e. that it would be desirable to be able to integrate interaction with sux0r into other tools.

Question 4.
If you would consider using a similar tool in the future, what do you think the advantages of doing so would be?

The main advantages offered by the users included time saved by filtering out unwanted articles, the ability to scan more journals, and a single place to scan the latest articles from interesting journals. Only one user considered a similar tool not to have any advantages.

A selection of responses follow below:

If trained sufficiently the tool would save time in showing the searches from interesting results, with keywords on saved interests.

To flag up interesting articles without the user having to actively search for them i.e. it would help with horizon scanning.

Make e-journals more helpful when filtering interesting articles and not interesting ones.

1. One advantage would be a single place to find interesting research articles. 2. If the feed is trained well, then less time is spent on uninteresting articles. 3. If it is integrated into broader search tools like iGoogle it would have wider reach.

As it highlights interesting/prospectively interesting journals that you may not be able to find easily using databases search such as science direct.

Quicker sorting of interesting and not interesting articles

Keeping up to date with new articles. But disadvantage is the guilt of seeing all the interesting things you should read but don’t have time to.

Saving time. However I am not sure I would be completely confident in the results I would get.

Screening for new articles would become more organised rather than my random search at the moment which only happens when I need to find information.

Tend to search on the basis of keywords; this appears to work better.

It does appear to throw up interesting articles that I might otherwise miss.

Time saving and effective worktime

Obviously it will save a lot of time

Simultaneous filtering of many journals

Make looking for papers more fun because much of the clutter is removed compared to reading journal indexes. And I find more interesting articles compared to googling or searching by keyword.

a) save time, reduce number of articles. b) We can create research group feed of interest

Even with uninteresting articles in the mix it still allowed me to find dozens of articles that would have passed me by otherwise. I felt it was worth the effort & still a lot less effort than reading all the tables of contents would have been. A key advantage for me was that it effectively allowed me to, in a similar length of time, scan the contents of a far greater number of journals than I would have studied by hand. A worthwhile tool if you can be bothered to train it.

Get an overview of recently published articles with at least some relevance to me, which at the moment I’m not getting.

2 Comments

Filed under trialling

Features: ReturnVectors and ReturnCategories

The Return RSS items for a user feature assumes that if you want to get only those items that have been classified under a certain category, you know the numerical codes used by sux0r to identify the vector and category. These features allow you to find those codes.

Return vectors for a user
The full design for this feature is available; however, the current implementation does not cover the authentication requirements.

The API call is an HTTP GET on [sux0rURL]/api/vectors/ (where [sux0rURL] is the URL for your sux0r installation, for this project that is http://icbl.macs.hw.ac.uk/sux0rAPI/icbl/ ). The only parameter is user= to specify a username.

examples
HTTP GET on http://icbl.macs.hw.ac.uk/sux0rAPI/icbl/api/vectors/?user=philb will return a list of the vectors used by philb (me). The data returned is pretty self-explanatory; in this case you get:

<?xml version="1.0"?>
<response xmlns:api="http://icbl.macs.hw.ac.uk/sux0rAPI/api/xmlns">
  <api:userNickname>philb</api:userNickname>
  <api:vectors>
    <api:vector>
      <api:vectorID>6</api:vectorID>
      <api:vectorName>WorkInterest</api:vectorName>
    </api:vector>

    <api:vector>
      <api:vectorID>33</api:vectorID>
      <api:vectorName>CETIS-Domain</api:vectorName>
    </api:vector>
  </api:vectors>
</response>
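As a rough sketch of what a client might do with this (Python, standard library only; the namespace URI is the one declared in the response above):

# Sketch: list a user's vectors by fetching and parsing the XML response.
import xml.etree.ElementTree as ET
from urllib.request import urlopen

NS = {"api": "http://icbl.macs.hw.ac.uk/sux0rAPI/api/xmlns"}
url = "http://icbl.macs.hw.ac.uk/sux0rAPI/icbl/api/vectors/?user=philb"

with urlopen(url) as resp:
    root = ET.fromstring(resp.read())

for vec in root.findall(".//api:vector", NS):
    print(vec.findtext("api:vectorID", namespaces=NS),
          vec.findtext("api:vectorName", namespaces=NS))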

Return categories for a user’s vector

The full design for this feature is available; however, the current implementation does not cover the authentication requirements.

The API call is an HTTP GET on [sux0rURL]/api/categories/ (where [sux0rURL] is the URL for your sux0r installation, for this project that is http://icbl.macs.hw.ac.uk/sux0rAPI/icbl/ ). There are two required parameters:
user to specify a username;
vec_id to specify the id of a vector used by that user.

examples
HTTP GET on http://icbl.macs.hw.ac.uk/sux0rAPI/icbl/api/categories/?user=philb&vec_id=6 will return a list of the categories used by philb (me) for the vector with id number 6 (which is “WorkInterest”). The data returned is pretty self-explanatory; in this case you get:

<?xml version="1.0"?>
<response xmlns:api="http://icbl.macs.hw.ac.uk/sux0rAPI/api/xmlns">
  <api:userNickname>philb</api:userNickname>
  <api:categories>
    <api:vector>
      <api:vectorID>6</api:vectorID>
      <api:vectorName>WorkInterest</api:vectorName>
    </api:vector>

    <api:category>
      <api:categoryID>12</api:categoryID>
      <api:categoryName>interesting</api:categoryName>
    </api:category>
    <api:category>
      <api:categoryID>13</api:categoryID>
      <api:categoryName>not interesting</api:categoryName>

    </api:category>
  </api:categories>
</response>
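Chaining the two calls, a client could resolve category names to the ids needed by the items feature; a sketch along the same lines as the one above:

# Sketch: map category names to ids for a given user and vector.
import xml.etree.ElementTree as ET
from urllib.request import urlopen

NS = {"api": "http://icbl.macs.hw.ac.uk/sux0rAPI/api/xmlns"}
url = "http://icbl.macs.hw.ac.uk/sux0rAPI/icbl/api/categories/?user=philb&vec_id=6"

with urlopen(url) as resp:
    root = ET.fromstring(resp.read())

categories = {cat.findtext("api:categoryName", namespaces=NS):
              cat.findtext("api:categoryID", namespaces=NS)
              for cat in root.findall(".//api:category", NS)}
print(categories)   # e.g. {'interesting': '12', 'not interesting': '13'}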

Error trapping
Unfortunately we couldn’t implement the error codes properly on our server: you get an HTTP status code of 200 OK whether or not the request succeeded. However, if you specify an invalid user name or vector id you do get sensible error messages returned, which include links to set you on the right track.

1 Comment

Filed under technical

Feature implemented: Return RSS items for a user

The single most important feature that we are adding with this project is the ability to publish feeds from sux0r corresponding to specified criteria, for example a feed aggregated from all the feeds that a user is subscribed to that have been classified under the same heading by the Bayesian algorithm. (Here’s the full specification if you’re interested). We have now completed work on this.

The API call is an HTTP GET on [sux0rURL]/api/items/ (where [sux0rURL] is the URL for your sux0r installation, for this project that is http://icbl.macs.hw.ac.uk/sux0rAPI/icbl/ ). The parameters you can use are:
user to specify the user name;
vec_id to specify the vector id;
cat_id to specify the category id;
feed_id to specify the id or URL of the feed;
keywords to specify any keywords for filtering the result feed;
threshold to specify the threshold value for the probable relevance against the category;
maxHits to specify a maximum number of hits to return.

Sorting wasn’t implemented; the default sort order is by date. Also, we didn’t get authentication working (but we dithered about whether it was necessary for this feature anyway, and life is easier if you can just get a feed into any feed reader).
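As a sketch of how a client might assemble a call from the parameters above (the values here are just the ones used in the third example below):

# Sketch: build an items query string from the documented parameters.
from urllib.parse import urlencode

BASE = "http://icbl.macs.hw.ac.uk/sux0rAPI/icbl"
params = {"user": "philb",      # user name
          "vec_id": 12,         # vector id
          "cat_id": 24,         # category id
          "threshold": 0.5,     # minimum probable relevance
          "maxHits": 30}        # maximum number of items
print(BASE + "/api/items/?" + urlencode(params))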

Examples:
http://icbl.macs.hw.ac.uk/sux0rAPI/icbl/api/items/?user=philb&maxHits=20
Gives the most recent 20 items from all the feeds to which user philb (that’s me!) subscribes. (I should note that not many of the feeds I subscribe to are Journal ToCs, so I’m not really using this for the type of feed for which it was intended. Nevertheless I find it kind of works.)

http://icbl.macs.hw.ac.uk/sux0rAPI/icbl/api/items/?user=philb&keywords=jisc&maxHits=20
Gives the most recent 20 items containing the word jisc from all the feeds to which I subscribe. Try changing jisc to jisc cetis or “jisc cetis” or “jisc AND cetis”.

http://icbl.macs.hw.ac.uk/sux0rAPI/icbl/api/items/?user=philb&vec_id=12&cat_id=24&threshold=0.5&maxHits=30
This is more interesting: vector 12 is my vector for classifying relevance to my research interests, and category 24 is the stuff that is relevant. So this is a feed of the stuff that is predicted to be relevant to my research interests (since the probability threshold is set to 0.5).

The results feed for that last call looks like this:

<?xml version="1.0"?>
<rss version="2.0" xmlns:api="http://icbl.macs.hw.ac.uk/sux0rAPI/api/xmlns" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Philb's RSS ItemsVector ID: 12, Category ID: 24, Threshold: 0.5, maxHits: 30</title>
    <link>http://icbl.macs.hw.ac.uk/sux0r206/user/profile/philb</link>
    <description>Use Case: Return the RSS Items for a User. User Nickname: philb. Summary of applied filters:  Threshold: 0.5;  maxHits: 30 results</description>
        <atom:link href="http://icbl.macs.hw.ac.uk/sux0rAPI/icbl/api/items/?user=philb&amp;vec_id=12&amp;cat_id=24&amp;threshold=0.5&amp;maxHits=30" rel="self" type="application/rss+xml" />
    <item>
      <title>An infrastructure service anti-pattern</title>
      <link>http://blog.paulwalk.net/2009/12/07/an-infrastructure-service-anti-pattern</link>
      <guid>http://blog.paulwalk.net/2009/12/07/an-infrastructure-service-anti-pattern</guid>
      <description>Last week I outlined an idea, that of the service anti-pattern, as part of a presentation I gave last week to the Resource Discovery Taskforce (organised by JISC in partnership with RLUK). The idea seemed to really catch the interest of and resonate with several of those members of the taskforce who were present at [...]</description>
      <pubDate>Mon, 07 Dec 2009 10:37:05 EST</pubDate>
      <source url="http://blog.paulwalk.net/feed">paul walk's weblog</source>
      <api:relevance>1</api:relevance>
    </item>
    <item>
      <title>Statistics of user trial results</title>
      <link>https://bayesianfeedfilter.wordpress.com/2009/12/07/statistics-of-user-trial-results</link>
      <guid>https://bayesianfeedfilter.wordpress.com/2009/12/07/statistics-of-user-trial-results</guid>
      <description>We now have results from our user trials showing how effective sux0r may be in filtering items from journal table of contents RSS feeds that are relevant to a user’s research interests. Quick reminder of how we ran the trials: 20 users had access to sux0r for 4 weeks to train the analyser in what [...]</description>
      <pubDate>Mon, 07 Dec 2009 07:41:18 EST</pubDate>
      <source url="https://bayesianfeedfilter.wordpress.com/feed">Bayesian Feed Filter</source>
      <api:relevance>1</api:relevance>
    </item>
<!--lots more items-->
  </channel>
</rss>

Apart from an additional element for the relevance of the item to the specified category, it’s plain RSS 2.0.
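Because the feed is plain RSS 2.0, any feed reader will cope (it will simply ignore the extra element); here is a minimal sketch of reading the relevance values programmatically, standard library only:

# Sketch: read titles and relevance scores from the result feed.
import xml.etree.ElementTree as ET
from urllib.request import urlopen

NS = {"api": "http://icbl.macs.hw.ac.uk/sux0rAPI/api/xmlns"}
url = ("http://icbl.macs.hw.ac.uk/sux0rAPI/icbl/api/items/"
       "?user=philb&vec_id=12&cat_id=24&threshold=0.5&maxHits=30")

with urlopen(url) as resp:
    root = ET.fromstring(resp.read())

for item in root.iter("item"):
    print(item.findtext("api:relevance", namespaces=NS),
          item.findtext("title"))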

Unfortunately we couldn’t implement the error codes properly on our server: you get an HTTP status code of 200 OK whether or not the request succeeded. Also, I think there are some error conditions that we don’t trap satisfactorily, for example specifying a non-existent user or category.

3 Comments

Filed under technical

Statistics of user trial results

We now have results from our user trials showing how effective sux0r may be in filtering items from journal table of contents RSS feeds that are relevant to a user’s research interests.

Quick reminder of how we ran the trials: 20 users had access to sux0r for 6 weeks to train the analyser in what they found interesting and not interesting. We then barred access for 4 weeks but continued to aggregate feeds and filter them based on that training. Then we invited the users to look at the results of the filtering: two feeds from sux0r, one aggregating information about journal articles published while the users were barred that sux0r predicted the user would find relevant, the other containing information about the rest of the articles, the ones that sux0r predicted the user wouldn’t find relevant. We had our users look through both feeds and tell us whether the articles really were relevant to their research interests. We lost two triallists and so have data on 18; you can see this data as a web page (or get the spreadsheet if you prefer).

The initial data needs a little explanation. The first columns (in yellow) relate to the number of items used in the initial six weeks to train the Bayesian analyser in what was relevant to the user’s research interests, what wasn’t, and the total number of items used in training. The “Additional docs” column relates to information added that didn’t come from the RSS feeds: we asked users to provide some documents that were relevant to their research interests for training, to make up for the fact that in a fairly short trial period the number of relevant items published may be low.

The next set of columns (in green) relate to the feed of items aggregated after the training (while the users had no access) that were predicted to match the user’s research interests, showing the number of items of interest in that feed, the total number of items in that feed, and the proportion of items in the feed that were interesting. The next three columns (in red) do exactly the same for the feed of items that were predicted not to be relevant.

For a quick overview of the results, here’s a chart of the fraction of interesting items in both feeds:

You need to be careful interpreting this chart. It hides some things; for example, the data point showing that the fraction of interesting items in one of the feeds was 1 (i.e. the feed of interesting items did indeed only have interesting items in it) hides the fact that this feed only had 2 items in it; the user found 9 items overall to be relevant to their research interests, and 7 of them were in the wrong feed. Perhaps that’s not so good.

So, did it work? Well, one way of rephrasing that question is to ask whether the feed that was supposed to be relevant (the “interesting” feed) did indeed contain more items relevant to the user’s research interests than would otherwise have been the case. That is, is the proportion of interesting items in the interesting feed higher than the proportion of interesting items in the two feeds combined? The answer in all but one case is yes; typically by a factor of between two and three. (The exception is a feed which achieved similar success in getting it wrong. We don’t know what happened here.)

Also we can look at the false negatives, i.e. the number of items that really were of relevance to the user’s interests that were in the feed that was predicted not to be interesting. The chart above shows quite nicely that after using about 150 items for training this was very low.

What about some statistics? It’s worth checking whether the increase in concentration of items related to a user’s research interests as a result of filtering is statistically significant. We used a two-sample Z test to compare the difference in the proportion of interesting items in the two feeds to the magnitude of difference that could be expected to happen as the result of chance:
[Table of Z values for each user]
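For reference, assuming the usual pooled form of the two-sample test for proportions (writing x_i for the number of interesting items and n_i for the total number of items in each feed), the statistic is:

z = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}\,(1-\hat{p})\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}}, \qquad \hat{p}_i = \frac{x_i}{n_i}, \qquad \hat{p} = \frac{x_1+x_2}{n_1+n_2}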

I have some reservations about this because of the small number of “interesting” items found in the feed which should be uninteresting when the filtering works: this means that one of the assumptions of the Z-test might not be valid when the filtering is working best. But any value of Z above 3 cannot reasonably be expected to have happened by chance.

Conclusion: for users who used more than about 150 items in training, the filtering produced a statistically significant increase in the concentration of items in the feed that were relevant to the user’s research interests, without filtering out a large number of items that would have been of interest. Next post: were the users happy with these results?

2 Comments

Filed under trialling