Performance Widget

Fast, secure, accessible websites are used by more people than slower, less secure, inaccessible ones. With this in mind, improving the accessibility, performance and security of FT products is something every stakeholder and product owner can agree is a good goal to aim for. However, if you look at the backlog of any user-facing FT website, you will see that the prioritised work is often not on these properties but on more visible aspects such as design, layout, content and functionality. How can we make these less visible properties more likely to be prioritised in our backlogs?

We propose that revealing these “invisible” features in a widget, directly on the pages viewed by stakeholders and product owners, will help improve these features in our products and thereby help increase viewership and retention of users. We implemented the widget as a Chrome extension, and have had a set of stakeholders within the company install it so we can collect the data needed to draw a conclusion for the project.

[Image: the Insights widget on a page, showing a mix of good and bad news]

Documentation on how members of staff can take part in the project can be found here.
Source code for the project can be found here.

Insights is a Chrome extension that runs a set of synthetic tests on any page the user visits and places the results inside a widget on the page, written in a contextualised manner to make the results easier to digest.
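As a rough sketch of that flow (the endpoint URL and response shape below are illustrative assumptions, not the project’s real API), the extension’s content script might look something like this:

```typescript
// content-script.ts — a minimal sketch of the widget-injection flow.
// The endpoint URL and the Insight shape are illustrative assumptions.

interface Insight {
  summary: string;      // contextualised, human-readable result
  highlighted: boolean; // whether the value falls outside its healthy range
}

async function showInsights(): Promise<void> {
  // Ask the server for the latest insights for the current page.
  const response = await fetch(
    "https://insights.example.com/api/insights?url=" +
      encodeURIComponent(location.href)
  );
  const insights: Insight[] = await response.json();

  // Render each result into a fixed-position widget on the page.
  const widget = document.createElement("div");
  widget.style.cssText = "position:fixed;bottom:1em;right:1em;z-index:99999;";
  for (const insight of insights) {
    const row = document.createElement("p");
    row.textContent = (insight.highlighted ? "▲ " : "✓ ") + insight.summary;
    widget.appendChild(row);
  }
  document.body.appendChild(widget);
}

showInsights();
```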

Measuring success of the experiment

Measuring the success of this experiment will be difficult because other teams may already be looking into these properties (such as Next.ft.com looking into performance, Origami.ft.com at accessibility and the security team at, well, security). The outcome we want, i.e. work in these areas being prioritised, is also hard to observe directly. Because of this we decided to use proxy metrics and a bit of surveying. The proxy metrics are gathered on every website visited by the user during the experiment. We archive all the metrics and, at the end of the experiment, graph them to see whether any of them have improved.

Technical Details

Why we chose a Chrome Extension

One of the NFRs (non-functional requirements) for the application was to make the application simple to deploy and require no day-to-day effort from the users. We had previously discussed three implementation approaches:

  • Origami component added to all the websites involved in the test.
  • Bookmarklet installed on user’s machines.
  • Chrome extension installed on user’s machines.

Creating an Origami component to be installed on all websites involved in the test would have involved co-ordinating efforts with the development teams of those websites, and would also have limited our experiment to websites which use Origami and are still under active development.

Installing a bookmarklet on users’ machines would mean the application could not run automatically; instead, users would have to click the bookmarklet manually whenever they wanted to view the insights. On top of this, deploying a new version of the bookmarklet would require each user to update their copy. This does not meet our NFR of no day-to-day effort from the users.

A Chrome extension on users’ machines meets all our NFRs: deploying to the internal Chrome Web Store is a very simple procedure that can be automated, and installing the extension on our users’ machines is also a simple, automatable procedure. The extension can also be configured to run on all the FT-owned websites where we wanted to gather insights.
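To illustrate that last point, here is a sketch of the extension manifest, written as a typed object from which a manifest.json could be generated at build time. The match pattern, file names and version are hypothetical placeholders, not the real configuration:

```typescript
// manifest.ts — a sketch of the extension manifest; the values below
// are placeholders, not the project's actual configuration.
const manifest = {
  manifest_version: 2,
  name: "Insights",
  version: "1.0.0",
  content_scripts: [
    {
      matches: ["*://*.ft.com/*"], // placeholder for the FT-owned origins
      js: ["widget.js"],           // the compiled widget content script
      run_at: "document_idle",     // inject after the page has settled
    },
  ],
};

export default manifest;
```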

How we gather the insights for a website

We chose to use synthetic (aka automated) testing as the means of gathering our insights because of concerns about the load-handling of live testing. We created a server with a plugin system for insight providers, making it simple to add new insight providers in the future if the need arises. We have one insight provider for each of the properties we are measuring: performance, accessibility and security.
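The plugin system could be as simple as a common interface that every provider implements. This is a hypothetical sketch, not the project’s actual code; all names here are illustrative:

```typescript
// A hypothetical insight-provider plugin interface.

interface InsightResult {
  name: string;    // e.g. "PageSpeed Insights score"
  value: number;   // the raw measurement
  summary: string; // contextualised explanation shown in the widget
}

interface InsightProvider {
  // The property this provider measures.
  property: "performance" | "accessibility" | "security";
  // Run the synthetic test for the given URL and report its result.
  gather(url: string): Promise<InsightResult>;
}

// With a common interface, registering a new provider is just a push.
const providers: InsightProvider[] = [];

function register(provider: InsightProvider): void {
  providers.push(provider);
}
```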

Once the insights for a website have been gathered, we store them in a database for historical purposes and to aid in drawing a conclusion against the hypothesis of this experiment. The latest insights for a website are also stored in an in-memory cache to help decrease the response time of the server to API requests.
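In outline, the read path might look like the following sketch, assuming a simple Map-based cache rather than whatever store the project actually uses; `queryDatabase` is a hypothetical stand-in for the real persistence layer:

```typescript
// A sketch of the read path: in-memory cache first, database second.

interface InsightResult { name: string; value: number; summary: string; }
declare function queryDatabase(url: string): Promise<InsightResult[]>;

const latestInsights = new Map<string, InsightResult[]>();

async function getInsights(url: string): Promise<InsightResult[]> {
  // Serve from the in-memory cache when possible to keep responses fast.
  const cached = latestInsights.get(url);
  if (cached) return cached;

  // Otherwise fall back to the historical store and warm the cache.
  const results = await queryDatabase(url);
  latestInsights.set(url, results);
  return results;
}
```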

How we decide whether an insight is worth highlighting

In order to help the user decide whether an insight’s result returned by our server is of concern, we place a green tick or a red triangle icon next to each result. The server has a predefined range of values for each insight: if an insight’s value is within this range it is not deemed worth highlighting; if it is outside the range, it is highlighted. An example of this is the PageSpeed Insights score of a website, for which we set the range to be between 60 and 100.
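A minimal sketch of that range check follows. Only the PageSpeed range (60 to 100) comes from the text above; the configuration shape and names are assumptions:

```typescript
// A sketch of range-based highlighting: a value inside its healthy range
// gets a green tick, anything outside gets a red triangle.

interface HealthyRange {
  min: number;
  max: number;
}

const healthyRanges: Record<string, HealthyRange> = {
  "pagespeed-score": { min: 60, max: 100 },
};

function isWorthHighlighting(name: string, value: number): boolean {
  const range = healthyRanges[name];
  if (!range) return false; // no range configured: nothing to flag
  return value < range.min || value > range.max;
}

// isWorthHighlighting("pagespeed-score", 45) -> true  (red triangle)
// isWorthHighlighting("pagespeed-score", 85) -> false (green tick)
```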

Testing personalised/authenticated websites

We chose to use synthetic testing to gather our data in order to minimise the testing load. One of the issues with synthetic testing is that the testing provider visits the website with its own browser, meaning a website with content personalised for the user would look different on the user’s device compared to the testing provider’s. This is unavoidable on the web today. Another difficulty is websites which require authentication. Most insight providers do not offer a way to authenticate on their testing devices; one provider which does is WebPageTest, which has an API to do just that. Because the majority do not, we decided to keep our project simple and not run any insight providers on websites behind authentication.
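In practice, skipping authenticated websites could be as simple as a guard that runs before any providers are invoked. A minimal sketch, assuming the server keeps a (here empty, hypothetical) set of origins known to require a session:

```typescript
// A sketch of skipping synthetic tests for pages behind authentication.
// The set of origins is a hypothetical placeholder; the real project
// would populate it with the properties that require a logged-in session.

const authenticatedOrigins = new Set<string>();

function shouldRunProviders(url: string): boolean {
  const { origin } = new URL(url);
  return !authenticatedOrigins.has(origin);
}
```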