Metrics are a concrete way of defining what a knowledge management or content management project will achieve, and of assessing whether it has met those goals.
In an environment of tight budgets and high expectations, metrics are an appropriate next step for an industry that prides itself on delivering big benefits.
Defining metrics is not easy, however, and much study and further practical experience will be needed before implementing such measures becomes simple or commonplace.
This article reviews the benefits of metrics, outlines some commonly used measures, and presents some practical tips and tricks.
It is hoped that this article will further stimulate the current discussions on the use of metrics in the knowledge management and content management communities.
Introducing metrics
Metrics, also known as ‘measures’ or ‘key performance indicators’ (KPIs), are simply a tool for assessing the impact of a particular project or activity.
While these are often numeric in nature (“improve sales by 20%”) they can also be qualitative (“improve staff satisfaction levels”).
In either case, metrics provide clear and tangible goals for a project, and criteria for project success.
With the far-reaching impact of both knowledge management (KM) and content management system (CMS) projects, the use of metrics is even more important.
How can you know if your project has succeeded without using metrics?
Benefits of metrics
Using metrics allows:
- Targets to be set
Metrics provide clearly defined goals and scope for projects, allowing for more concrete design, planning and implementation. Metrics state "this is what we plan to do, and this is the benefit it will have".
- Success to be assessed
Metrics provide very specific 'success criteria' for projects, allowing the outcomes to be assessed at the end of implementation.
- ROI to be estimated
In the current times of tight IT budgets, there is an expectation that projects will deliver quantifiable benefits. This is often defined in terms of 'return on investment' (ROI). Without strong metrics, estimating ROI is little more than guesswork.
- Ongoing viability to be tracked
Metrics continue to provide value beyond initial implementation. Appropriate measures will quickly highlight issues, allowing them to be resolved before they grow or spread.
- Lessons to be learnt
By providing a concrete way of assessing the success (or otherwise) of various approaches, a greater understanding can be gained. This can then be applied when establishing new initiatives.
In short, metrics can be of tangible benefit both at the early stages of a project, and throughout its life.
Determine business goals
Metrics cannot exist in isolation of business objectives. The first step is therefore to determine the goals for the project. These should focus on business, not technical objectives.
For example, ‘deliver information more effectively’ is not a strong business goal. (What is the expected benefit of delivering this information?) Instead, a better business goal could be ‘Improve the rate of product cross-selling by front-line staff’.
See our article What are the goals of a CMS? for further discussions on setting business goals.
Business metrics are much more effective than “hits and visitors”
Business metrics
The most powerful metrics are those that directly measure the desired business outcomes. The medical community provides a clear example of this.
In a hospital, a content management system may be set up to deliver policy and procedure information to clinical staff. At first, it might seem that the goal would be to "improve the dissemination of information".
A much more meaningful goal, however, would be to "reduce adverse outcomes for patients, including patient death". The CMS can then be seen as assisting in saving lives.
This is a measurable goal, and existing clinical metrics can be used to assess whether it has been met.
Such measures are a clear example of business-focused metrics, and these should be established wherever possible.
While the following sections provide a range of implementation metrics, the use of business metrics should be strongly preferred.
Implementation metrics
These metrics relate to measuring the success of implementing the content management or knowledge management system.
While these are often easy to measure, and thus the focus of many metrics efforts, they can be very limited in their ability to assess the true impact of the project.
This is a selection of implementation metrics:
First ask: “how often should staff be using the system?”
System usage
A measure of how much the new system is being used is often the starting point for implementing metrics. While it is easy to determine, be aware that it only indirectly measures the impact of the system.
There are a number of ways of measuring system usage:
- Web usage statistics
This is the most common form of usage tracking, implemented using commercial web analysis packages. It provides information such as hits, pages, and users (a minimal log-analysis sketch is shown at the end of this section).
Beware of an over-reliance on this information. For a start, it can be quite misleading, due to the limitations of web protocols.
It is also important to determine what level of usage defines project success. For example, a news site should probably be visited each day by the majority of staff. In comparison, policies and procedures may only need to be read a few times each year to have the desired effect.
- Search engine usage
Search engine logs can be analysed to produce a range of simple reports, showing usage, and a breakdown of search terms.
- Messages sent/posted
This is often used as a measure of success for communication and knowledge sharing systems, such as e-mail, discussion groups, and online forums.
- Other knowledge creation measures
If staff are expected to contribute information to the system, this can be directly measured. For example, the number of pages created in a content management system could be an effective metric. The exact nature of the metric will depend on the design of the system or project.
- Knowledge use
A more direct measure for many KM projects is whether the information is being used in practice. As usage normally happens outside of the system, it must be reported by the staff. Provide a simple mechanism for notifying when information is used, and implement a rewards mechanism to encourage timely reporting.
In addition to measuring the broad growth in usage, metrics can be correlated with specific internal marketing or change management activities. This provides measurable feedback on the success of individual initiatives.
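To illustrate how basic usage figures can be pulled together, the sketch below summarises a web server access log into daily page views and unique client addresses, and compares each day against a target. It assumes the common 'combined' log format; the file name, target figure and use of IP addresses as a rough proxy for users are illustrative assumptions, not a recommendation of any particular analysis package.

```python
# Minimal sketch: summarise a web server access log into daily page
# views and unique client addresses, then compare against a target.
# Assumes the common "combined" log format; adjust parsing as needed.
import re
from collections import defaultdict

LOG_LINE = re.compile(r'(?P<ip>\S+) \S+ \S+ \[(?P<day>[^:]+):[^\]]+\]')

def summarise(log_path, daily_target=30000):
    views = defaultdict(int)        # page views per day
    visitors = defaultdict(set)     # unique client IPs per day (rough proxy for users)
    with open(log_path) as log:
        for line in log:
            match = LOG_LINE.match(line)
            if not match:
                continue
            day = match.group("day")    # e.g. "12/Mar/2004"
            views[day] += 1
            visitors[day].add(match.group("ip"))
    for day in sorted(views):
        met = "met" if views[day] >= daily_target else "below target"
        print(f"{day}: {views[day]} views, "
              f"{len(visitors[day])} unique addresses ({met})")

# summarise("access.log")  # file name is illustrative
```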
Number of users
Related to system usage is the total number of staff accessing the system. This should clearly grow as the system is rolled out across the organisation.
Identify the desired number of users, and ensure this is realistic. For example, some types of staff may have no use for the information, or may not have access to PCs.
You may also need to implement a login mechanism in order to determine accurate user numbers.
Information quality
This is an important metric for content management systems, or knowledge sharing initiatives. Unfortunately, it can be quite hard to measure in practice.
There are a number of approaches to measuring information quality:
- User rankings
This involves asking the readers themselves to rate the relevance and quality of the information being presented. While it is simple to implement, it is not clear that it will be widely used by staff without tying it to a matching rewards structure.
- Expert review
Subject matter experts or other reviewers can directly assess the quality of material in the content management system, or KM platform.
- Edits required
A content management system usually provides some form of workflow capability. Audit trails generated by this can be analysed to determine how many edits or reviews were required for each piece of content. If the original material is of a high quality, it should require little editing.
- Usability testing
This provides a practical way of determining whether the information, and the way it is structured, can be understood by end users. Usability testing can be qualitative, with the goal of identifying problems, or quantitative, in which the same tests are run each time and timed.
- Links created
A popular page with useful information will be more frequently linked to from other parts of the system. By measuring the number of links to each page, the effectiveness of individual pages can be determined, as sketched below. (Google, for example, uses this approach to rank its search results.)
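As a rough illustration of this approach, the sketch below counts inbound links between pages, given each page's HTML. The `pages` dictionary mapping URLs to HTML is a hypothetical stand-in for whatever a real CMS would expose, not an actual API.

```python
# Sketch: rank pages by how often other pages link to them.
# The `pages` dict (URL -> HTML) is illustrative, not a real CMS API.
import re
from collections import Counter

HREF = re.compile(r'href="([^"#?]+)', re.IGNORECASE)

def inbound_link_counts(pages):
    counts = Counter()
    for source_url, html in pages.items():
        for target in set(HREF.findall(html)):   # count each link once per page
            if target in pages and target != source_url:
                counts[target] += 1
    return counts.most_common()

pages = {
    "/policies/leave": '<a href="/policies/travel">travel</a>',
    "/policies/travel": '<a href="/policies/leave">leave</a> <a href="/policies/leave">again</a>',
    "/news/today": '<a href="/policies/leave">leave policy</a>',
}
for url, links in inbound_link_counts(pages):
    print(url, links)
```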
Staff feedback is an effective metric and source of knowledge
Information currency
This is a measure of how up-to-date the information stored within the system is. The importance of this measure will depend on the nature of the information being published, and how it is used.
The best way to track this is using the metadata stored within the CMS, such as publishing and review dates. By using this, automated reports showing a number of specific measures can be generated:
- average age of pages
- number of pages older than a specific age
- number of pages past their review date
- lists of pages due to be reviewed
- pages to be reviewed, broken down by content owner or business group
Be aware that not all pages will need to be updated frequently. The CMS should allow variable review periods (or dates) to be specified, depending on the nature of the content. For example, pricing information may need to change daily, while policies often change only yearly.
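As an indication of how such reports might be automated, the sketch below derives the average page age and a list of overdue pages from publishing and review metadata. The page records and field names (published, review_due, owner) are illustrative; a real CMS would supply these through its own reporting tools or API.

```python
# Sketch: generate a content-currency report from CMS metadata.
# The page records and field names are illustrative assumptions.
from datetime import date

pages = [
    {"url": "/pricing", "published": date(2024, 6, 1),
     "review_due": date(2024, 6, 2), "owner": "Sales"},
    {"url": "/policies/leave", "published": date(2023, 1, 10),
     "review_due": date(2025, 1, 10), "owner": "HR"},
]

def currency_report(pages, today=None):
    today = today or date.today()
    ages = [(today - p["published"]).days for p in pages]
    overdue = [p for p in pages if p["review_due"] < today]
    print(f"Average page age: {sum(ages) / len(ages):.0f} days")
    print(f"Pages past review date: {len(overdue)}")
    for p in sorted(overdue, key=lambda p: p["review_due"]):
        print(f"  {p['url']} (owner: {p['owner']}, due {p['review_due']})")

currency_report(pages, today=date(2024, 9, 1))
```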
This metric is as much a tool for ongoing site management, as it is a measure of project success.
User feedback
An easy-to-use feedback mechanism should be established for any system designed to provide knowledge. Use of such a feedback system is a clear indication that staff are using the knowledge.
More importantly, field staff are often the best source of changes and updates, as they are responsible for putting the information into practice.
This can be a difficult metric to interpret. While a small number of feedback messages may indicate that the published information is entirely accurate, it is more likely that the system is not being accessed, or that the feedback mechanism is not recognised as useful.
Conversely, while a large number of feedback messages may indicate poor-quality information, it does indicate strong staff use. It also shows that staff have sufficient trust in the system to commit the time needed to send in feedback.
Maintenance costs
Direct cost savings can be realised through the implementation of a CMS or KM project. There are only a few quantifiable ways of directly saving money:
- Reduced staff requirements
Staff who were formerly needed to conduct labour-intensive maintenance activities can be reallocated or retrenched. (Harsh, but true.)
- Reduced capital requirements
The amount of hardware or software required is reduced by implementing the new system or approach.
A word of warning: too tight a focus on cost savings will limit the strategic value of the project, and may risk a staff backlash or a high level of change resistance.
Staff efficiency
The implementation of new information or knowledge systems can generate efficiency gains across a large number of staff.
Typically, this is calculated by determining the amount of time that the system saves each person in a day. (Often a few minutes, due to improved navigation or searching.)
Multiply this out to a year, and then by the number of users, to get a total time saving. This can then be multiplied by the average staff salary to determine a cost saving.
While this figure can be very impressive (in the millions), it can be hard to realise these paper savings.
Estimates have been made of what percentage of time savings is actually realised, broken down by position. Use these to determine more accurate metrics.
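The calculation described above can be made explicit, as in the sketch below. All figures (minutes saved, working days, salary and the realisation rate) are illustrative assumptions, not benchmarks.

```python
# Worked example of the staff-efficiency calculation described above.
# All figures are illustrative assumptions, not benchmarks.
def efficiency_saving(minutes_per_day, staff_count, avg_salary,
                      working_days=220, hours_per_day=7.5,
                      realisation_rate=0.3):
    hours_saved = minutes_per_day / 60 * working_days * staff_count
    hourly_rate = avg_salary / (working_days * hours_per_day)
    paper_saving = hours_saved * hourly_rate
    # Only a fraction of time savings is typically converted into real
    # output, so scale the paper figure by an assumed realisation rate.
    return paper_saving, paper_saving * realisation_rate

paper, realised = efficiency_saving(minutes_per_day=5, staff_count=2000,
                                    avg_salary=60000)
print(f"Paper saving: ${paper:,.0f} per year")
print(f"Realised saving (assumed 30%): ${realised:,.0f} per year")
```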
Increased efficiency is only of value if it can be realised
Printing costs
Reducing printing costs is a metric commonly associated with intranets, and can be an effective cost-justification for implementing electronic distribution mechanisms.
Track the amount of printing of internally- or externally-targeted materials, before and after implementation. Use this to determine costs saved.
Typically, this metric is focused solely on centralised printing, such as brochures, manuals, and other support material. Local printing by individual users often proves too hard to track.
Be aware, though, that reducing centralised printing costs may simply redistribute the costs out to individual users, who print out the manuals for themselves (at a greater per-page cost).
Distributed authoring
The extent to which the business as a whole takes responsibility for keeping content up to date is a metric in itself.
At the most basic level, the number of CMS or KM system authors can be tracked against a target.
A more rigorous approach uses statistics from the workflow capabilities to determine the level of activity of each author. This can then be used to determine the real usage of the system in each business unit.
Process efficiency, reduced time
By conducting process analysis, it is possible to determine the steps needed to complete critical business activities, and the time each takes.
If this analysis is done before the implementation of the new system, the improvement in process efficiency can easily be measured.
Implementing a knowledge management system often also reduces the turnaround time for completing tasks, which generates many benefits.
Transaction costs
A process analysis activity can also determine the costs involved in completing tasks. This allows the direct cost savings made by implementing the new system to be calculated.
Multiplied out by the number of times the activity is completed in a year, the whole-of-business savings can be determined. These are often substantial in large organisations.
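A brief worked example, with illustrative figures, shows how per-transaction savings scale to a whole-of-business number:

```python
# Worked example: per-transaction saving scaled to a yearly figure.
# The cost and volume figures are illustrative assumptions.
cost_before = 12.50      # cost per transaction before the new system ($)
cost_after = 9.00        # cost per transaction after implementation ($)
transactions_per_year = 250_000

annual_saving = (cost_before - cost_after) * transactions_per_year
print(f"Annual saving: ${annual_saving:,.0f}")   # $875,000
```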
Customer service metrics
While many content management and knowledge management systems are targeted at internal staff, they can nonetheless have a considerable impact on the organisation’s customers.
A range of metrics can be used to track this:
Product sales
Increasing product sales by implementing a KM or CMS project is often a goal as part of a wider e-commerce initiative.
Measurement is obviously simple, but determining the role played by the knowledge management or content management project can be more difficult.
Lead conversion
The sale of anything other than simple off-the-shelf products is often the result of considerable pre-sales effort.
The conversion of a prospective customer (a lead) into a sale depends on being able to clearly convey product strengths, and the ability to paint a favourable and accurate comparison with competitors' offerings.
Knowledge management has long been viewed as playing a strong role in this process.
By measuring the rate of conversion from leads into actual sales, the success of the KM initiative can be assessed.
Customer satisfaction
This is a key consideration for many service-oriented organisations. Both content management and knowledge management projects can improve customer satisfaction, particularly in areas where advice or support is being given.
Customer satisfaction is best measured using standard market research techniques, such as:
- surveys
- follow-up telephone calls
- focus groups
Consistency of advice
Customer service and advice exposes an organisation to the potential for substantial legal liability. This can be greatly reduced by ensuring the same accurate and consistent advice is given, regardless of who serves the customer.
This can be assessed using a number of methods:
- Call monitoring
Many call centres implement random monitoring, to assess the quality of service provided.
- "Double jacking"
This is a term for sitting in with a customer service representative, and listening to a call. This is often done with trainee staff.
- Follow-up market research
Standard market research techniques can be used when contacting a customer to determine what advice they were given.
Call handling time
Where complex advice or assistance is being provided, effective information sources can substantially cut down on the time needed to complete a customer call (call handling time).
Modern call centres use advanced telephony systems to record a range of detailed metrics, including call handling time. These systems can provide overall statistics, or drill down to individual groups or staff members.
Transactions processed
This is a measure of overall call centre efficiency, and like call handling time, can be tracked using standard telephony systems.
Support requests
Provision of information such as frequently asked questions is often designed to reduce the number of support requests to the help desk, or other support groups.
This applies to both internally generated (staff) and externally generated (customer) support requests.
Use the help desk’s incident management system to track the number of requests made. Ideally, the management system will break down requests into categories, making it easier to isolate the effect of the KM or CMS project.
Product development cycle
The effective use of information can substantially reduce the “time to market” for new products. Collaboration platforms and other knowledge management initiatives can also be very effective in bringing together the cross-disciplinary skills needed to take a product from concept to implementation.
Cultural metrics
Knowledge management projects may endeavour to change the broader culture within an organisation, to match strategic needs or external factors.
There are a number of metrics that can be used to measure the impact on staff culture:
Success stories, anecdotes
While the use of numerical metrics gives the most tangible results, the value of anecdotal evidence should not be overlooked. This can be one of the most effective ways of demonstrating the benefits of a project, and promoting further cultural change.
Anecdotes and success stories are particularly applicable to knowledge management initiatives, which focus more on knowledge sharing than on implementing a new technology solution.
Use structured storytelling methods to gain the best value out of anecdotal evidence.
Staff morale, satisfaction
Knowledge management projects often focus on cultural issues, such as staff morale, and levels of job satisfaction. In a broader sense, this can also be a goal of implementing a content management system.
Existing staff surveys can be the most effective way of identifying changes in staff morale. By using a control group (see later), the effects of the project can be isolated.
On a smaller scale, stakeholder interviews can elicit feedback on staff satisfaction and morale.
Cultural change
If a knowledge management project aims to provoke a shift in overall organisational culture, this can be measured.
Tools such as tailored staff surveys can determine the opinions of a large number of staff on specific issues or topics.
Staff learning
A knowledge management system can be used to facilitate staff learning, either by providing new opportunities, or making the learning process more efficient.
If a competence-based assessment system is in place, this can be used to quantify the improvements as a result of the knowledge management project.
Guidelines and tips
This section provides advice and guidance on how to successfully implement metrics as part of your content management or knowledge management project:
Be specific
Metrics should avoid unspecific, generalised targets. For example, a metric such as "Improve intranet usage by staff" provides no clearly measurable way of determining success.
Instead, a better-defined metric could be “Improve the average number of intranet hits to 30,000 per day by June”.
Each metric should specify the following information:
- target value
- time frame
- who or what will be measured
- any assumptions
- dependencies on other projects or systems
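One way to keep metric definitions consistent is to record each of the elements above in a simple structure, as in the sketch below. The field names mirror the list, and the example values are hypothetical.

```python
# Sketch: capturing a metric definition with the elements listed above.
# The example values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class MetricDefinition:
    name: str
    target_value: str          # e.g. "30,000 hits per day"
    time_frame: str            # e.g. "by June"
    measured: str              # who or what will be measured
    assumptions: list = field(default_factory=list)
    dependencies: list = field(default_factory=list)

intranet_usage = MetricDefinition(
    name="Intranet usage",
    target_value="30,000 hits per day on average",
    time_frame="by June",
    measured="all staff with desktop PC access",
    assumptions=["web statistics package deployed", "staff numbers stable"],
    dependencies=["single sign-on rollout"],
)
print(intranet_usage)
```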
Determine a baseline
Measures must be taken before a project is initiated, if metrics are to be fully effective. Only then can a clear “before and after” comparison be made.
This can be hard to justify when there is strong pressure to start (and finish) the project rapidly.
By making these pre-project measurements, however, it becomes possible to quantify the success of the project the moment the first metrics are assessed. As this is often immediately after implementation, it makes for rapid and tangible feedback, while the issues are still relevant.
Build metrics into the design of your system
Automate measures
Wherever possible, build the collection of measures into the design of the system itself, so the metrics become an automatically generated product of normal usage.
By doing this, the burden of implementing and managing metrics is greatly reduced. Meaningful reports must also be created, to avoid being swamped by too many metrics.
Not all metrics are amenable to automated collection, and in practice you will need a mix of both ‘hard’ and ‘soft’ measures.
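As a minimal illustration of this principle, the sketch below records a usage event as part of normal system operation, so the figures accumulate as a by-product of everyday use. The event names and the choice of a CSV file for storage are illustrative assumptions only.

```python
# Minimal sketch: record usage events as part of normal system
# operation, so metrics are a by-product rather than a separate task.
# The event names and CSV storage are illustrative choices.
import csv
from datetime import datetime, timezone

def record_event(event_type, page, user_id, log_file="usage_events.csv"):
    with open(log_file, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(), event_type, page, user_id,
        ])

# Called from within the system's normal request handling:
record_event("page_view", "/policies/leave", "staff-0042")
record_event("feedback_submitted", "/policies/leave", "staff-0042")
```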
Measure the right things
Metrics put a sharp focus on the things they measure. By putting a metric in place, management is strongly indicating what they consider to be important.
The danger is that if the wrong metrics are put in place, they will distract from the real issues and goals. At worst, they can entrench undesirable behaviour, or reduce productivity.
The classic example is in call centres. A very common measure is ‘call handling time’, which tracks how long it takes to conclude a customer call.
Is this a good measure? Perhaps it is, but a short call is not a goal in itself if the customer hangs up unsatisfied. If this were the only measure in place, it would probably encourage abrupt, fast-talking and rude customer service representatives (who would have great call handling times).
Fewer measures, not more
Implementing and tracking metrics can be very burdensome, so it is important to set up only the metrics that will be most valuable.
The danger is that an over-emphasis on gathering metrics can negatively impact on business productivity, counteracting the benefits of the project itself.
There is also a need to analyse the metrics, and make decisions based on them. Too many metrics will simply mean that most will be ignored in practice.
Collect only those metrics you have an immediate use for.
Recognise the impact of metrics on staff
Implementing metrics must be done with care, recognising that the metrics themselves will have an impact on staff.
A poorly-chosen metric may be seen as unreasonable, or even draconian. This may impact on staff morale and motivation, undoing all the potential benefits of the system.
Gathering metrics has a very real human cost
Rewards and recognition
In many cases, the metrics used to assess the success of a project are also used as the basis for staff recognition or rewards schemes. In these situations, the metrics are used in a positive way, to highlight the benefits generated by staff.
Care must be taken, however, when establishing this link. If staff perceive that there are concrete benefits to be gained if measures are good, there is a strong incentive for the manipulation of the figures. This is particularly true in the case of monetary reward schemes.
Recent articles have started to suggest that recognition is often more effective than rewards, and is more sustainable over the long term. This can take the form of either positive recognition, or the highlighting of poor performers.
Metrics and usability
Implementing sound metrics can also be an effective way of identifying usability problems. That is, problems with the design of the system itself.
For example, if measures show that one section of the system is being heavily used, while another is not, this would suggest an investigation of design issues.
In this way, metrics can become ‘usability testing in the field’.
Effect of other activities
One of the challenges in establishing and tracking metrics is determining which effects were due to the project, and which came from outside changes or activities.
It is rare that the KM or CM project is the only change happening within the organisation. Instead, there is likely to be much occurring, on both small and large scales, all of which may impact on the metrics.
In the end, it may not matter what generated the change, as long as the desired outcome was achieved.
If real learning is to occur, however, it is necessary to be able to measure the effect of any one single initiative.
There are a number of mechanisms for doing this:
- Correlate the rate of change and timing of changes with the specific project activities.
- Use a control group (see below).
Your project doesn’t exist in isolation, making measuring metrics more difficult
Use a control group
This is a common method used in many scientific fields. It involves taking two (ideally identical) groups, working with one, while leaving the other unchanged.
By using the two groups, any common factors or external changes can be eliminated, allowing the effect of the project to be isolated and accurately measured.
In the context of an organisation, this most often involves implementing a new system for one group or section, while leaving another section to use the earlier platform.
In a large rollout, this may be as simple as comparing the effect on the first team to be deployed to, versus the last team updated. In this way, the benefits of a control group can be obtained without permanently depriving one team of the new system.
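A simple way to use the two groups is to compare the before-and-after change in a metric for the pilot group against the same change in the control group, as in the sketch below. The call handling figures are illustrative.

```python
# Sketch: compare a metric's before/after change in the pilot group
# against the control group, to separate the project's effect from
# organisation-wide trends. All figures are illustrative.
def relative_effect(pilot_before, pilot_after, control_before, control_after):
    pilot_change = (pilot_after - pilot_before) / pilot_before
    control_change = (control_after - control_before) / control_before
    return pilot_change - control_change   # change attributable to the project

# e.g. average call handling time in seconds
effect = relative_effect(pilot_before=420, pilot_after=340,
                         control_before=415, control_after=400)
print(f"Change attributable to the project: {effect:+.1%}")
```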
Re-evaluate metrics
The chosen metrics should be reassessed every six to twelve months, to determine whether they are still effective and appropriate.
It will often be necessary to drop some metrics, or establish some new measures. This is not a bad thing, and is often just a reflection of the changing business environment.
The goal of metrics is to assess the success of projects, in the best way possible. Achieving an unchanging set of metrics for all time is not a goal in itself.
For this reason, while changing metrics makes it harder to compare one year to another, it is more important to have an effective way of assessing whether projects are still meeting business goals.
Balanced scorecard
The Balanced Scorecard is a relatively recent management methodology that recognises the importance of business aspects beyond the purely financial.
Growing in popularity, it is strongly built on defining metrics for individual projects, and overall goals.
This makes it fit comfortably with the desire to implement metrics for knowledge management and content management projects.
Summary
Setting up metrics for knowledge management and content management projects is not easy, and care must be taken in selecting appropriate measures.
To ensure the best outcomes, follow these steps:
- Determine tangible business goals for the project.
- Identify key business metrics that are directly related to these goals.
- Identify supporting implementation, customer service and cultural metrics.
- Use best practice approaches, such as measuring a baseline, or using a control group, to ensure that the metrics are effective.
By approaching the use of metrics with a clear understanding of the issues and goals, they can be a powerful way of setting targets, measuring success, and identifying problems as they surface.