

What is the worth of a single task?

Published by Vincent Pickering

To quantify such a statement, accept that differing viewpoints will return a variety of values.

  • An employee may value the expertise earned.
  • A contractor could covet the hourly rate.
  • Users appreciate the new functionality available.
  • To the business, a dominant market position is the task's worth.

Quantifying a task's true worth necessitates that a business decide what it values and what it does not.

Work with the business to help them define metrics they are aiming to achieve over a fixed period. Prioritise tasks that match these metrics and help a business to extrapolate the valued benefit they should receive. Use success criteria to track progress and feed back to the business the actual vs. predicted return on a task delivered.

Audit systems regularly. Rationalise each area of the system. An area should have a business goal. In the absence of business goals it should possess user goals. Otherwise delete it.

Extrapolate the same logic to each component on a page. Components should have a monetary return attached to them or meet a need of the user; otherwise delete them.

Success Criteria

“Don’t collect statistics, interpret them.”

Design based on data collection and solid reasoning should be at the heart of any evolving interface. Don't collect statistics and serve them to the client as raw data, or whittle the raw data down to a few points that you believe may interest the client. The misunderstanding that the client needs or wants to see raw data is at the heart of many poorly thought-out re-design projects and misplaced application evolution paths.

Raw data in and of itself lacks the context businesses need to build a realistic picture of how the system is performing against their business goals. Spewing forth disparate, disconnected technical details at a client, on topics such as popular screen resolutions, operating system versions, or their customers' favourite browser in countries they don't operate in, is unlikely to yield a constructive conversation. Feedback in this format will often inadvertently result in the client requesting changes to the system that yield negative results.

A client should not need to understand raw data; it is not their field of expertise, nor do they understand the context in which to frame it. The job of the designer is to work with the client to translate the research data into a meaningful format that informs the client of the impact on their business.

Begin with the business goals of the project, referring to the success criteria previously agreed with the client. One or many goals can be attached to each success criterion. Work with the client to agree which goals to track, then identify the statistics that indicate progress towards each goal. Now assign each goal a number of points dependent upon its relevancy to the objectives we wish to achieve. Finally, for each success criterion we can total the points scored and see at a glance how the site is performing.

It is broken down in this format:

  • Success Criteria
    • Business Goal - Statistic - Statistic - Statistic

Tracking data in this way can be hard for the designer to grasp at first. Let us analyse this in the context of a real world scenario:

Working with a client we break down success criteria like so:

  • Increase Profits By 10%
    • Average Transaction Spend £50 Or More - Time To Complete Checkout Process - Number Of Items Purchased - Price Of Item(s) Purchased - Used Voucher Code?
    • Increase Page Conversion - Time Spent On Page - Purchased Item Via Landing Page? - Shared Item On Social Networks? - New Customer Or Returning Customer?
    • Increase Number Of Transactions - Number Of Repeat Transactions Over 6 Months - Used Voucher Code? - User Arrived Via Email Newsletter - User Arrived Via Search Engine - User Arrived Via Social Network Share
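One way to hold this hierarchy in code is as plain nested data. A minimal sketch in Python follows; the labels are copied from the example above, while the structure itself is an illustrative choice rather than a format prescribed by any analytics tool:

```python
# Success criterion -> business goals -> statistics, as nested data.
# The labels mirror the example above; the dict/list shape is an
# illustrative assumption, not a prescribed tracking format.
success_criteria = {
    "Increase Profits By 10%": {
        "Average Transaction Spend £50 Or More": [
            "Time To Complete Checkout Process",
            "Number Of Items Purchased",
            "Price Of Item(s) Purchased",
            "Used Voucher Code?",
        ],
        "Increase Page Conversion": [
            "Time Spent On Page",
            "Purchased Item Via Landing Page?",
            "Shared Item On Social Networks?",
            "New Customer Or Returning Customer?",
        ],
        "Increase Number Of Transactions": [
            "Number Of Repeat Transactions Over 6 Months",
            "Used Voucher Code?",
            "User Arrived Via Email Newsletter",
            "User Arrived Via Search Engine",
            "User Arrived Via Social Network Share",
        ],
    },
}

# Every goal (and its statistics) can now be enumerated for tracking.
goals = list(success_criteria["Increase Profits By 10%"])
```

Keeping the hierarchy as data means adding a goal or statistic for the next reporting period is an edit to one structure, not a change to reporting code.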

We can work with the client to decide upon the context, relevance and weighting of each statistic we are tracking. When a user arrives at the site we begin tracking their “session” and weight points according to what we are tracking.

For example, if the user shares the product page over social networks we could add 1 point to the transaction score; if they shared it over multiple networks the client may value this more and score them 3 points.

There are a vast number of different criteria we can use to score an interaction for success criteria such as:

  • Duration on a page
  • Selecting the primary action
  • Selecting a secondary action
  • Selecting a related item
  • Interacting with related media (video or audio)
  • Arriving on the page via a promotion
  • Returning to the page within a set time period

We define point values, thresholds and bonuses with the client for each statistic. The client can then track the point scores periodically and identify where the site is underperforming and what work needs to be carried out.
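To make the weighting concrete, here is a minimal sketch in Python of how agreed point values and thresholds might be encoded and applied to a session record. The field names and most of the weights are invented for illustration; only the one-network versus multiple-networks example comes from the text above:

```python
# Each rule pairs a check on the session record with the points it earns.
# Field names and point values are illustrative assumptions agreed with
# a hypothetical client, not figures from a real project.
RULES = [
    (lambda s: s["networks_shared"] == 1, 1),    # shared on one network
    (lambda s: s["networks_shared"] > 1, 3),     # multiple networks: valued more
    (lambda s: s["clicked_primary_action"], 3),  # selected the primary action
    (lambda s: s["seconds_on_page"] > 120, 1),   # duration threshold
]

def score_session(session: dict) -> int:
    """Total the points one tracked session earns under the agreed rules."""
    return sum(points for check, points in RULES if check(session))

session = {
    "networks_shared": 2,
    "clicked_primary_action": True,
    "seconds_on_page": 300,
}
# score_session(session) -> 3 + 3 + 1 = 7
```

Because the rules live in one table, renegotiating a weight or threshold with the client is a one-line change rather than a rewrite of the tracking logic.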

Tracking in this manner adds context to the data the client receives. Transactions and actions on the website that designers traditionally may not track, or even inform the client of, could be highly relevant to the client's business goals; omitting them can give the impression the site is underperforming when in fact it is doing very well. Working in this manner builds a strong bond of trust and shared objectives: it is always clear what both parties are working towards, and all future decisions can be taken in context rather than by "best guess" or chasing trends.

Ultimately, always measure customer patterns that are sensitive to the client's business and that have relevance to their core business objectives. Data that is actionable will always outweigh data that only a developer needs to be aware of, unless that information will inhibit the business in some way, at which point it should take precedence.

Practical Example

In the past I led a team hired by a major manufacturer in the mobile phone industry that wanted to market its new range of phones in the UK. I was tasked with targeting mobile users specifically, and the campaigns would be designed to run consecutively throughout the year, with a concept flexible enough to suit differing phone types and tablet devices as they became available on the market. In addition, the client wished to track the data from each campaign to measure success, tailoring each subsequent release to improve upon the last.

This was a loose set of requirements with great potential from a design point of view, but from a business standpoint there were many things that could trip up the project and derail it, such as:

  • Collecting data points and handing them to the client will lead to confusion and potentially push the work in a direction that yields bad results. The design team's job is to distil the data into a format that can fulfil the client's business objectives and help them understand how this can be tracked and improved.
  • Running many campaigns (often several at once) can get complicated; slipping on one release could create a knock-on effect for the rest of the year and strain the relationship.
  • The client has to be able to understand and contribute to the project long term, expressing business goals that the design concept must encompass.
  • If the client does not see a visible return on their investment, the relationship can become strained and the project cancelled.

Working with the client, success criteria were agreed upon, giving the project focus and direction:

  1. The campaigns must raise awareness of the new devices and their capabilities.
  2. The campaigns need to improve the manufacturer's public perception in the mobile space.
  3. The campaigns need to show that the new devices are “On Trend” and a viable alternative to other comparable devices.
  4. The campaigns must show that the new devices are superior to other comparable phones in key selling points.

After productive client discussion it was decided that each campaign would be a microsite dedicated to the new device. Users could experience engaging interactions and view marketing material about the device. It would be structured in such a way as to inform the user and be shared with others.

This format would allow the client to decide ahead of time what their business focus was for the device and microsite (e.g., camera, screen size). This approach ensured that a high percentage of users arriving were from the target demographic, increasing our chance of success.

Traditionally, tracking data to improve results and progress is done by counting the number of people who viewed the page(s) or clicked on the banner. This is a poor way to track user data for the client and gives misleading results. The fact that a user viewed a web page does not mean they engaged with it; users click banners by mistake and then leave, which does not mean we achieved any of our success criteria. Working with the client, we devised a way to track meaningful engagement and measure progress that was relevant to their business objectives.

When the user performed actions we wished to track, we would score them a maximum of 3 points depending upon engagement. This would produce a point total for each user's visit. Points could be totalled and averaged to give a good indication of engagement and exposure to the new device in the microsite, and to indicate areas where engagement failed, illuminating a path for the next release to focus on improving.

For example:

  • If the user views more than 3 pages in the site, score 1 point.
  • If the user views the video, score 2 points.
  • If the user views the video all the way to the end, score an additional 1 point.
  • If the user plays with the site's interactive widget, score 3 points.
  • If the user spends more than 5 minutes or views all the pages on the microsite, score 1 point.
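The rules above translate almost directly into code. A sketch in Python follows, where the visit record's field names and the microsite's page count are assumptions; the point values are taken from the list above:

```python
TOTAL_PAGES = 6  # assumed size of the microsite

def score_visit(visit: dict) -> int:
    """Apply the scoring rules from the list above to one visit."""
    points = 0
    if visit["pages_viewed"] > 3:
        points += 1                      # viewed more than 3 pages
    if visit["watched_video"]:
        points += 2                      # viewed the video
        if visit["finished_video"]:
            points += 1                  # watched it all the way to the end
    if visit["used_widget"]:
        points += 3                      # played with the interactive widget
    if visit["minutes_on_site"] > 5 or visit["pages_viewed"] >= TOTAL_PAGES:
        points += 1                      # long visit, or saw every page
    return points

def average_score(visits: list) -> float:
    """Average points per visit: the engagement figure fed back to the client."""
    return sum(score_visit(v) for v in visits) / len(visits)
```

A fully engaged visit (every page, full video, widget, a long stay) scores 1 + 2 + 1 + 3 + 1 = 8 points, so averages across a campaign give the client a single, comparable engagement figure per release.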

Notice how we assign more points when the user displays higher levels of engagement with the site and smaller values when they perform a passive action. All the types of engagement we are tracking are important, but some less than others; this should be reflected in the feedback to the client.

Now we have a system that:

  • Is flexible enough to adapt with each subsequent release.
  • Targets only the users that are relevant.
  • Gives the client easily measurable and quantifiable business goals to track, with clear progress and success.
  • Provides a re-usable framework for the design and development teams to work towards.
  • Sets clear design goalposts, with a feedback mechanism built in to understand which aspects of the design and user experience are working and which need to be improved.

Spend time on your projects considering engagement over impressions, don't be afraid to focus on a particular segment or demographic, and ensure that the solution you have devised never loses sight of the client's business goals or objectives.