
Conducting a User Trial

One of the aims of the Bayesian Feed Filter project was to test the ability of the recommender service to identify new journal papers of interest to researchers based on knowledge of papers which they have recently read. The recommender service used was sux0r: a blogging package, RSS aggregator, bookmark repository, and photo publishing platform with a focus on Naive Bayesian categorization and probabilistic content.

As well as creating an API for sux0r, the project created a Bayesian Feed Filter theme, which included simplifying the sux0r interface so that users saw only the RSS aggregator with Bayesian filtering. The Bayesian Feed Filter uses Bayes’ theorem to attempt to predict whether or not a new item in a feed is relevant to an individual’s research interests, based on the user’s previous categorization of items. This explicit categorization by the user is known as training; the system also allows other text documents to be used as training material.
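For readers who want the mechanics, the underlying idea is the standard naive Bayesian text classification familiar from spam filters; the formulation below is a generic sketch rather than a description of sux0r’s exact implementation. For a category $c$ (here Interesting or Not Interesting) and a new item containing words $w_1, \ldots, w_n$, the filter estimates

$$P(c \mid w_1, \ldots, w_n) \;\propto\; P(c) \prod_{i=1}^{n} P(w_i \mid c)$$

where each $P(w_i \mid c)$ is estimated from how often that word appeared in items the user previously trained into category $c$. The item is assigned to whichever category has the higher posterior probability, and that probability is also what the threshold filtering described below is compared against.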

Twenty researchers from engineering- and science-based schools within Heriot-Watt University volunteered to participate in the trial to test the ability of the Bayesian Feed Filter to identify new journal papers of interest to them based on knowledge of papers which they have recently read. The volunteers were asked to provide a list of journals that they follow, or would like to follow if they had the time. Each volunteer was set up with an account on the Bayesian Feed Filter, which was preloaded with RSS feeds of the journals they said they were interested in and contained two categories for training: Interesting and Not Interesting.

An API was developed during the project which included the feature Return RSS Items for a User; this was used to create personalised RSS feeds for each user. The feeds could be filtered by category (interesting or not interesting) and by threshold (the likelihood of an item belonging to a particular category).


Stage One: Initial Questionnaire

The first stage of the trial involved a short questionnaire to gauge the researchers’ methods of current awareness and their expectations of a service filtering journal articles matching their interests. (Results of Initial Questionnaire).

Stage Two: Demonstration of the Bayesian Feed Filter
Volunteers were each given a demonstration of how to mark items as relevant or not relevant to their interests. These items typically include the title and abstract of the journal article. The users were also shown how to use the Train Document feature, which allows them to include text not in the RSS feeds, such as the full text of articles they had written, cited or read. (How to use Bayesian Feed Filter)

Stage Three: Training the Bayesian Feed Filter
The volunteers had access to the Bayesian Feed Filter for six weeks and were asked to train the system periodically by categorizing items as either “interesting” or “not interesting”, and to supplement the interesting items with other documents relevant to their interests. (User Activity).

At the end of the six-week training period, access to the Bayesian Feed Filter was suspended and all articles in the system were removed. The system continued to run for four weeks, automatically categorizing new articles as “interesting” or “not interesting” to the researchers based upon the training provided. Unfortunately, two of our volunteers were not able to continue with the trial, so the trial continued with 18 volunteers.

Stage Four: Returning the Filtered Feeds
The users were presented with two feeds: one comprised articles rated by the feed filter as having at least a 50% chance of being of interest to them, and the other comprised articles rated as having at least a 50% chance of not being of interest to them. The feeds were presented to the users in Thunderbird (an email and RSS client). Users were then asked to mark each article from both feeds with a star if they found it to be of interest. Thus the feeds represent the Bayesian Feed Filter’s categorization of items into “interesting” and “not interesting”, and the stars show the users’ opinion of whether the items are relevant to their research interests or not.

The number of false positives (items in the interesting feed not starred) and the number of false negatives (items in the not interesting feed starred) could then be calculated for each user. A successful outcome would be for the interesting feed to contain a significantly higher proportion of interesting articles than an unfiltered feed, with few items of interest wrongly filtered into the “not interesting” feed. The success of the filtering seems to be dependent on the training provided, with users who trained over 150 items seeming to get a reasonable measure of success. (Statistics from the User Trials).
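To make the evaluation concrete, those counts can be turned into per-user rates. The denominators below are one natural reading of the targets quoted in the questionnaire rather than necessarily the exact normalisation used in the statistics post:

$$\text{false positive rate} = \frac{\text{unstarred items in the interesting feed}}{\text{all items in the interesting feed}}, \qquad \text{false negative rate} = \frac{\text{starred items in the not interesting feed}}{\text{all starred items in both feeds}}$$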

Stage Five: Follow Up Questionnaire
The final stage of the trial was a follow-up questionnaire, intended to gauge the users’ satisfaction with the filtering process, whether they would be interested in using a similar system in the future, and what the advantages of doing so would be. (Results of the Follow Up Satisfaction Survey).



Filed under trialling

How to Install the Bayesian Feed Filter

The Bayesian Feed Filter (BayesFF) is an optional interface for the popular sux0r software package. To be able to use the BayesFF interface you only need to follow the normal process for installing sux0r and make a few edits in the sux0r configuration file.

The BayesFF interface will allow you to use the API and the web interface developed by the BayesFF project. In general, installing sux0r is a simple process that takes less than 30 minutes to complete, depending on the type of PHP configuration on your web server. If you are not familiar with installing and configuring PHP packages that require access to the web server configuration files, you may want to ask your IT support team to install sux0r for you. However, if you wish to install sux0r yourself, the following detailed installation guide will help you.

A. Prerequisites

* Configuring PHP to enable the mb, gd, and PDO libraries (a quick way to check these is shown in the sketch after this list):
– mb (mbstring) is a non-default extension; you need to enable it explicitly with a configure option. See http://www.php.net/manual/en/mbstring.installation.php for details
– gd is the GD image library, which you will need to install (available at http://www.libgd.org/) and enable with a PHP configure option. See http://www.php.net/manual/en/image.installation.php for details
– the PDO driver is enabled by default as of PHP 5.1.0, but you may need to enable the MySQL-specific driver. Please consult http://www.php.net/manual/en/pdo.installation.php and http://www.php.net/manual/en/ref.pdo-mysql.php to find out more about PDO installation.

* MySQL 5.0.x, set to support UTF-8 characters
(further information at http://dev.mysql.com/doc/refman/5.0/en/charset-connection.html)

* Apache 2.x webserver with mod_rewrite module enabled
(a simple but good tutorial on enabling mod_rewrite can be found at http://www.tutorio.com/tutorial/enable-mod-rewrite-on-apache)
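Before moving on to the installation, a throwaway script along the following lines can confirm that the PHP extensions listed above are present (the dependencies.php script shipped with sux0r, used later in the configuration section, does a fuller check; the file name check-prereqs.php is just an example):

<?php
// check-prereqs.php -- rough sanity check for the PHP prerequisites listed above.
// Illustrative sketch only; it is not part of the sux0r distribution.
$required = array('mbstring', 'gd', 'pdo_mysql');
foreach ($required as $ext) {
    echo $ext . ': ' . (extension_loaded($ext) ? 'OK' : 'MISSING') . "\n";
}
// PDO itself should also be available (enabled by default since PHP 5.1.0).
echo 'PDO class: ' . (class_exists('PDO') ? 'OK' : 'MISSING') . "\n";
?>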

B. Installation

To install sux0r code on your web server:
1. Log in to your server and go to the directory where you want to install sux0r
2. Execute the following Unix command:
svn export https://sux0r.svn.sourceforge.net/svnroot/sux0r/branches/icbl/
3. Execute these two commands:
chmod 777 ./data
chmod 777 ./temporary

To create the MySQL database and tables for sux0r:
4. Create a database named “sux0r” on your MySQL server
5. Import ./supplemental/sql/db-mysql.sql into MySQL

C. Configuration

1. From the shell, execute these commands:
mv ./sample-config.php ./config.php
mv ./sample-.htaccess ./.htaccess

2. Edit ./config.php and ./.htaccess appropriately (follow the instructions included inside these files). The changes you need to make are fairly straightforward:

Edit the database connection: $CONFIG['DSN']
Edit the URL for your installation of sux0r: $CONFIG['URL']
Edit the title: $CONFIG['TITLE']
If you want to use the BayesFF interface, you will need to change the default value of the $CONFIG['PARTITION'] configuration parameter found in config.php,
from:
$CONFIG['PARTITION'] = 'sux0r';
to:
$CONFIG['PARTITION'] = 'bayesff';
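Putting those edits together, the relevant part of config.php for a BayesFF installation might look something like the block below. The host name, database name and site title are placeholders, and whether the database username and password go into the DSN or into separate settings depends on your version of sux0r, so follow the comments in sample-config.php rather than this sketch:

<?php
// Placeholder values -- replace with your own server details.
$CONFIG['DSN'] = 'mysql:host=localhost;dbname=sux0r';  // PDO-style database connection
$CONFIG['URL'] = 'http://yourwebsite/sux0r210';        // base URL of this installation
$CONFIG['TITLE'] = 'Bayesian Feed Filter';             // title shown in the web interface
$CONFIG['PARTITION'] = 'bayesff';                      // selects the BayesFF interface instead of the default sux0r one
?>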

3. To check your installation, run the ./supplemental/dependencies.php script from your browser. For example:
http://yourwebsite/sux0r210/supplemental/dependencies.php (If there are no errors, OK will be returned with a link to your new installation.)

4. If the previous step didn’t produce any errors, point your web browser to http://yourwebsite/sux0r210/supplemental/root.php and follow the onscreen instructions to make yourself a sux0r root user.

5. Set up a cron job to fetch RSS feeds every x minutes (we recommend starting by running it every 60 minutes). The PHP script that fetches the feeds is already provided by sux0r and is available at http://yourwebsite/sux0r210/modules/feeds/cron.php
For example:
0 * * * * /bin/nice /usr/bin/wget -q -O /dev/null "http://yourwebsite/sux0r210/modules/feeds/cron.php" > /dev/null 2>&1

6. Delete the ./supplemental directory from the webserver.

Sux0r should now be successfully installed on your website.


Filed under dissemination, technical

How To Use Bayesian Feed Filter

I have created 5 screencasts showing users how to use the Bayesian Feed Filter.

  1. How to Register an account on Bayesian Feed Filter http://screenr.com/WkA
    • Go to http://icbl.macs.hw.ac.uk/sux0r206/
    • Click on Register (Top Right of Screen)
    • Enter a Nickname, Email Address and Password
    • Verify Your Password
    • Add Any Additional Information
    • Enter the Anti-Spam Code
    • Click Submit
  2. How to Login and Subscribe to RSS Feeds http://screenr.com/ckA
    • Once you have registered an account click on Login (Top Right of Screen)
    • Enter your Nickname and Password
    • Click on Feeds (You will be presented with a list of all feeds on your first login)
    • Scroll to the bottom of the list and click on Manage Feeds
    • Select the Checkboxes of the feeds you would like to subscribe to
    • Click Submit
    • You can add a new feed by clicking on Suggest Feed (an administrator will need to approve the feed first)
    • You can browse the feeds by clicking on the titles of the feeds
  3. How to train Bayesian Feed Filter to Filter your RSS Feeds http://screenr.com/3kA
    • Once you have logged in to your account and subscribed to some feeds you can start training
    • Click on your nickname (Top right of the screen)
    • Click on Edit Bayesian
    • Enter the name of a vector (list of categories) and click add (in this case the vector is called Interestingness)
    • Enter the name of your first category and click add (in this case Interesting)
    • Enter the name of your second category and click add (in this case Not Interesting)
    • Click On your nickname then on Feeds
    • You can start training items by clicking on the drop down menu of categories
    • If the item is already displaying the category you wish to train it in, you will first need to select the other category and then reselect the correct category
    • Items that have been trained will display the Vector as green text
  4. How to train Bayesian Feed Filter using other documents http://screenr.com/vSK
    • Once you have logged in to your account and subscribed to some feeds you can start training
    • Click on your nickname (Top right of the screen)
    • Click on Edit Bayesian
    • Copy and paste text from other documents into the Train Document text area
    • Select the category and click train
    • You can also categorise other documents
    • Copy and paste text from other documents into the Categorize Document text area
    • Select the vector and click categorize.
    • The probability of the document belonging to each category in the vector will be displayed.
  5. How to view filtered RSS Items by threshold/keywords http://screenr.com/y1K
    • Click on Feeds
    • At the top of the screen select the category and set a threshold
    • Click on threshold
    • Only the items relevant to the selected category above the set threshold are displayed
    • To filter by keywords, type your keywords into the keywords text box
    • Click on threshold
    • Only the items containing those keywords will be displayed


Filed under dissemination

BayesFF: Final post

Diagram of prototype: schematically we can show how the prototype supports the aggregation of RSS feeds comprising table of contents information from selected journals and filters them (using pre-existing software called sux0r) into two feeds, one of which has information about those papers that are predicted to be relevant to a user’s research interests. The project has added the ability to interact with sux0r through third-party software.

Our work has shown how effectively this works for a trial group of researchers; in most cases, after sufficient training of the system, the outgoing feeds were successfully filtered so that one contained a significantly higher concentration of interesting items than the raw feeds and the other did not contain a significant number of interesting items.

End User of Prototype:
We have an installation of sux0r which people are welcome to register on and which can be used to set up feeds for aggregation (you will not automatically be given sufficient privileges to approve feeds, so it is best to contact the project about this). The base URL for the API for this installation is http://icbl.macs.hw.ac.uk/sux0rAPI/icbl/ and the API calls which have been implemented are documented in the following posts on this blog: Return RSS items for a user and ReturnVectors and ReturnCategories. Also available: a summary of the other features that have been scoped for the API. The latest update was 08 December 2009.
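As a rough illustration of what calling the API from third-party code looks like, the sketch below fetches a user’s filtered items from the installation above and prints their titles. The endpoint path and the query parameters (user, category, threshold) are placeholders of mine, not the documented call; see the Return RSS items for a user post for the real signature, and note that the planned OAuth support may add an authentication step:

<?php
// Illustrative only: the path and parameter names below are guesses, not the documented API.
$base = 'http://icbl.macs.hw.ac.uk/sux0rAPI/icbl/';
$url  = $base . 'returnRssItems?user=example&category=Interesting&threshold=0.5';

// The filtered items are assumed to come back as an ordinary RSS feed.
$feed = @simplexml_load_file($url);
if ($feed === false) {
    die("Could not fetch or parse the feed\n");
}
foreach ($feed->channel->item as $item) {
    echo $item->title . "\n";
}
?>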

Here’s a screencast of Lisa using the API:


(NB the version at the end of the link is a whole lot clearer than the embedded YouTube version, especially if you click on the view in HD option).

The code for our work on the API is in a branch of the main sux0r repository on SourceForge.

Project Team
Phil Barker, philb@icbl.hw.ac.uk, Heriot-Watt University (project manager)
Santiago Chumbe, S.Chumbe@hw.ac.uk, Heriot-Watt University (developer)
Lisa J Rogers, l.j.rogers@hw.ac.uk, Heriot-Watt University (researcher)

Project Website: http://www.icbl.hw.ac.uk/bayesff/
PIMS entry: https://pims.jisc.ac.uk/projects/view/1360

Table of Contents for Project Posts
Development work

User trialling

Community Engagement

Project Management


Filed under management

Preliminary findings of user trials

We’re now coming to the end of the user trials. Here are some preliminary conclusions, which mostly relate to the start of the trials, when we gave our users a questionnaire to check our assumptions about what would help and their expectations of what we might do.

Our users come from the Science and Engineering schools at Heriot-Watt University: they’re computer scientists, engineers, physicists, chemists, bioscientists and mathematicians. Just over half are PhD students; most of the others are post-docs, though there are two lecturers and a professor.

This still seems like a good idea.
That is to say, potential users seem to think it will help them. We wanted 20 volunteer users for the trial and we didn’t find it difficult to get them; in fact we got 21. Nor was it too difficult to get them to use sux0r; only one failed to use it to the extent we required. Of course there was a bit of chivvying involved, and we’re giving them an Amazon voucher as a thank-you when they complete the trial, which has probably helped, but compared to other similar evaluations it hasn’t been difficult to get potential users engaged with what we’re trying to do.

Our assumptions about how researchers keep up to date are valid for a section of potential users.
We assumed that researchers would try to keep up to date with what was happening in their field by monitoring what was in the latest issues of a defined selection of relevant journals. That is true of most of them to some extent. So, for example, 11 said that they received email alerts to stay up to date with journal papers. On the other hand, the number of journals monitored was typically quite small (5 people looked at none; 8 at 1-4; 6 at 5-10; and 2 at 11-25). This matched what we heard from some volunteers: that monitoring current journals wasn’t particularly important to them compared to fairly tightly focused library searches when starting a new project and hearing about papers through social means (by which I mean through colleagues, at conferences and through citations). Our impression is that it was the newer researchers, the PhD students, who made more use of journal tables of contents. This would need checking, but perhaps it is because they work on a fairly specific topic for a number of years and are less well connected to the social research network, whereas a more mature researcher will have accreted a number of research interests and will know and communicate with others in the same field.

Feeds alone won’t do it.
Of our 21 mostly young science and technology researchers, 9 know that they use RSS feeds (mostly through a personal homepage such as Netvibes), 5 don’t use them but know what they are, and 7 have never heard of them; 2 use RSS feeds to keep up to date with journals (the same number as use print copies of journals and photocopies of journal ToCs), compared with 11 who use email alerts.

If you consider this alongside the use of other means of finding new research papers I think the conclusion is that we need to embed the filtered results into some other information discovery service rather than just provide an RSS feed from sux0r. Just as well we’re producing an API.

We have defined what “works” means for the filtering
We found that currently fewer than 25% of articles in a table of contents are of interest to the individual researchers, and they have an expectation that this will rise to 50% or higher in the filtered feed (7 want 50%, 7 want 75%, and one wants everything to be of interest). On the other hand, false negatives, that is interesting articles that wrongly get filtered out, need to be lower than 5-10%.

Those are challenging targets. We’ll be checking the results against them in the second part of the user tests (which are happening as I write this), but we’ll also check whether what we do achieve is perceived as good enough.

Just for the ultra-curious among you, here’s the aggregate data from the questionnaire for this part of the trials:

Total Started Survey: 21

Total Completed Survey: 21 (100%)

No participant skipped any questions. (Figures show the percentage of respondents followed by the number of respondents selecting each option.)

1. What methods do you use to stay up to date with journal papers?
Email Alerts 52.4% 11
Print copy of Journals 14.3% 3
Photocopy of Table of Contents 9.5% 2
RSS Feeds 9.5% 2
Use Current Awareness service (i.e. ticTOCs) 4.8% 1
None   0.0% 0
Other (please specify) 61.9% 13
2. How do you find out when an interesting paper has been published?
Find in a table of contents 14.3% 3
Alerted by a colleague 38.1% 8
Read about it in a blog 9.5% 2
Find by searching latest articles 76.2% 16
Other (please specify) 47.6% 10
3. How many journals do you regularly follow?
None 23.8% 5
1-4 38.1% 8
5-10 28.6% 6
11-25 9.5% 2
26+   0.0% 0
4. Do you subscribe to any RSS feeds?
Yes, using a feed reader (i.e. bloglines, google reader) 9.5% 2
Yes, using a personal homepage (i.e. iGoogle, Netvibes, pageflakes) 23.8% 5
Yes, using a desktop client (thunderbird, outlook) 4.8% 1
Yes, using my mobile phone 4.8% 1
No, but I know what RSS Feeds are 23.8% 5
No, never heard of them 33.3% 7
Other (please specify)   0.0% 0
5. When scanning a table of contents for a journal you follow, on average, what percentage of articles are of interest to you?
100%   0.0% 0
Over 75%   0.0% 0
Over 50% 4.8% 1
Over 25% 19.0% 4
Less than 25% 71.4% 15
I don’t scan tables of contents 4.8% 1
6. The Bayesian Feed Filter project is investigating a tool which will filter out articles from the latest tables of contents for journals that are not of interest to you.
What would be an acceptable percentage of interesting articles for such a tool?
I would expect all articles to be of interest 4.8% 1
I would expect at least 75% of articles to be of interest 33.3% 7
I would expect at least 50% of articles to be of interest 33.3% 7
I would expect at least 25% of articles to be of interest 19.0% 4
I would only occasionally expect an article to be of interest 9.5% 2
7. What percentage of false negatives (i.e. wrongly filtering out interesting articles) would be acceptable for such a tool?
0% (No articles wrongly filtered out) 14.3% 3
<5% 23.8% 5
<10% 38.1% 8
<20% 4.8% 1
<30% 4.8% 1
<50%   0.0% 0
False negatives are not a problem 14.3% 3
8. What sources of research literature do you follow?
Journal Articles 95.2% 20
Conference proceedings 71.4% 15
Pre-prints 14.3% 3
Industry News 33.3% 7
Articles in Institutional or Subject Repositories 19.0% 4
Theses or Dissertations 57.1% 12
Blogs 33.3% 7
Other (please specify) 19.0% 4


Filed under trialling

BayesFF in 45 seconds

I’m doing a 45 second presentation on the Bayes Feed Filter project at the JISC Rapid Innovation Development meeting in Manchester today. This is it:

The Bayesian Feed Filter will help researchers keep up to date with current developments in their field. It will automatically filter RSS and Atom feeds from journals’ tables of contents to (hopefully) select those items that are relevant to an individual’s research interests.

It uses Bayesian statistical analysis, the same approach used in many spam filters. First you need to train it with samples of what you are and aren’t interested in; then it compares the frequency with which words occur in the text to predict whether new items are on a similar topic to the samples that you were interested in.

We are testing whether this approach works for researchers and table of contents feeds, and building an API, so we would like to talk to anyone who could use it to personalize their own data presentation.


Filed under dissemination

New features planned for sux0r

My last post described what sux0r already does, this one describes the features for the API that we plan to add.

The idea is to allow users of a remote application to classify feeds and to see the results, i.e. do what was described in that last post but without using the sux0r interface. The hope is that this will allow the use of the filter to be embedded in their own personal toolset, and more generally make the functionality of sux0r as a feed filter/classifier available to other services and applications.

To do this we think the API needs to provide access to the following sux0r functionality (the priority refers to our priority for implementing the feature):

1. Authorise account access for user application
A user gains access to their account through an application using the API (via OAuth). High priority.

2. Add a New Feed
A user suggests a feed to be made available for adding to sux0r users’ accounts. High priority

3. Approve a Feed for a User
A feed administrator approves a feed added by a user so that it can be added to users’ accounts. High priority

4. Associate feed with a user
A user associates an approved feed with their account. High priority

5. Create a new Vector for a User
A user creates a new classification vector. Medium priority

6. Create a new Category for a User’s Vector
A user creates a new classification category on a specified vector. Medium priority.

7. Train a Document for a User
The user submits a document and the desired classification to train the classifier. High Priority.

Note: The document could be an RSS Item, which already exists in the database and hence will have an RSS ID number, or it could be plain text, which needs to be added to the database and then trained.

8. Return the RSS Items for a User
A user gets all items from the RSS feeds to which they are subscribed. Feeds may be sorted or filtered according to specified criteria (e.g. only those in a certain category). Very high priority.

9. Return RSS Feeds for All Users
A user gets a list of all the feeds in the database. Medium priority.

10. Return RSS Feeds for a User
A user gets a list of all the feeds they are subscribed to. High priority

11. Remove feed
A user requests to remove a feed (association) from their account. Medium priority

12. Return vectors
A user gets a list of all the vectors she has created. Medium priority

13. Return categories
A user wants to view all the categories they have created for a vector. Medium priority

14. Export the Bayesian Token Analysis for a User
A user gets the information on frequency of occurrence of words in each vector-category.


Filed under dissemination, technical