Showing posts with label Metrics. Show all posts

Thursday, November 15, 2018

Measuring Information Resource Value, Part 4: Users

We saw in previous entries in the MIRV series that you need two types of metrics: quantitative and qualitative. Users matter to both: you need users to measure usage, and you need to know whom to survey to determine the business impact of a service.

But before you can run your quantitative and qualitative analyses, you need to define what actually constitutes a user.

Wait - is this even controversial? You have no idea. (Actually, you probably do). Let's take a look at the problem.

Broadly, there are two different categories of agreements - Per Seat and Group.

The Per Seat User
Among the dimensions that are in constant tension between you and your vendors is how you define a user.

Back in the day, you would buy a subscription to Horizon Research that had 15 seats. Fifteen seats = 15 users. Right?

Nominally, that's true. And you can bet the vendor would love the discussion to end there. But you don't want it to -- at least in the context of determining value. You need more detail about how a service is being used.

How you define a user can, and should, be scoped by each user's usage.  (I will discuss usage metrics in Part 5 of the MIRV series at length). 

In the above example: of the 15 users, only 10 logged in more than once per month. And of those ten, only 8 viewed, on average, more than a page per login. In fact, Horizon usage in your company is Pareto distributed: 80% of usage comes from just 3 users.

So you've got 3 users who've never logged in. You've got another 2 who logged in only 4 times all year. So you could argue only 10 users really count for your ROI analysis. But worth noting: 3 of these ten account for almost 80% of usage!

So how do you define User here: 3, 10, 12, or 15? My money is on 10. Why not 3? Because you don't know the business impact of the usage from your active-but-limited 7 users. You need to consider the qualitative metrics I discuss in Part 3 of the MIRV series.

In other words: The definition of user is inextricably linked to usage. You may have seats that are extraneous to your value analysis. Be ruthless in disqualifying your casual or inactive users in order to come up with a meaningful footprint for the resource. And then you can proceed to assess the business impact (i.e. qualitative metric analysis).
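As a rough sketch, the disqualification step above can be expressed in a few lines of Python. The login counts and the once-a-month threshold are assumptions for illustration, chosen to match the 15-seat example:

```python
# A minimal sketch of disqualifying casual or inactive seats.
# Threshold is an assumption: at least one login a month to count.
MIN_LOGINS_PER_YEAR = 12

# Illustrative annual logins per seat, consistent with the example above:
# 3 seats never logged in, 2 logged in only 4 times, 10 are active.
annual_logins = [0, 0, 0, 4, 4, 20, 26, 30, 33, 37, 40, 46, 310, 320, 330]

# Seats that count toward the value analysis
active_users = [n for n in annual_logins if n >= MIN_LOGINS_PER_YEAR]
print(len(active_users))  # 10

# Share of all logins from the top 3 users (the Pareto effect)
top3_share = sum(sorted(annual_logins)[-3:]) / sum(annual_logins)
print(f"{top3_share:.0%}")  # 80%
```

The threshold is the judgment call: move it up or down and your "real" user count, and therefore your per-seat economics, moves with it.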

The Group User
Nominally defining the number of users for a per seat license is pretty simple. Even netting out casual or inactive users, while subject to debate, is easy to define.

But what about when you have a group license? What if you're licensing a web app or even a data feed that doesn't come with a fixed number of subscribers, seats, users, etc.?  Typically, a vendor will try to define your group as broadly as possible, including as many potential users as possible.

You will want the opposite: you'll want to define the user base as narrowly as possible because you want the license (and the cost) to reflect the actual value of the service to the firm.

For example: Horizon is offering you enterprise access to their syndicated research for $120,000 a year. Awesome! Especially when you consider they were charging $42,000 for just 6 licenses. As the vendor explained to you: we're giving you access for all 60 people in your firm for just $2,000 per year instead of $7,000 for the 6-seat license.

But hold on - the vendor is arguing that everyone in your company is a user. This doesn't reflect the reality of how the service will be used, and by whom. Consider the many facets you should apply to your user base to come up with a more realistic denominator to define User.

- 40 of the 60 people in your company are revenue generating (P&L);
- 30 of those 40 revenue-generating employees are in the division that needs the service (Division);
- 10 of the 30 users in your division are senior staff who won't be doing research (Rank);
- Of the 20 remaining junior staffers, 15 work in consulting and need the service; the other 5 are in sales and don't (Role).

That takes your Actual User base down to 15 people within a firm of 60. That's a nominal per-seat cost of $8,000 per Actual User.

(I talk at length about enterprise versus per seat decision-making here.)

Whatever number you decide to use, your next step in defining users in a group license is to apply the Per Seat user instructions I offer above: you now need to look at usage. Usage stats almost always whittle the number down further - thus increasing the per seat cost. In this example, of the 15 Actual Users - 5 of those 15 may have never logged in, leaving 10 users and increasing the per seat cost to $12K!
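The funnel above reduces to simple arithmetic. Here is a sketch using the illustrative Horizon figures from this example (all numbers are from the text, not real pricing):

```python
# Narrowing an enterprise "user" count down to Actual Users,
# using the illustrative Horizon enterprise deal above.
enterprise_cost = 120_000

firm = 60
revenue_generating = 40    # P&L facet
in_division = 30           # Division facet
junior = in_division - 10  # Rank facet: senior staff won't do research
actual_users = junior - 5  # Role facet: 5 sales staffers don't need it

print(f"{actual_users} Actual Users at ${enterprise_cost / actual_users:,.0f} per seat")

# Now apply the Per Seat usage filter: suppose 5 of the 15 never log in.
active_users = actual_users - 5
print(f"{active_users} active users at ${enterprise_cost / active_users:,.0f} per seat")
```

Each facet is just a subtraction, but making them explicit keeps the denominator defensible when the vendor pushes back.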

Whether you're dealing with a per-seat license or a group license, you need to define user thoughtfully. Critical to that process is Usage, the subject of Part 5.

- Kevan Huston

Sunday, November 11, 2018

Measuring Information Resource Value, Part 3: Qualitative Metrics

In the second part of the Measuring Information Resource Value series, I laid out the case for gathering quantitative data inputs toward an ROI decision. Now we must look at what I call qualitative metrics.

Where quantitative metrics measure processes, qualitative metrics measure outcomes. I typically divide Qualitative metrics into three categories:

  1. Verbatims - these are written testimonials of the value of a service. They can be solicited as part of a formal feedback process, or unsolicited when users simply tell you how much they like a service. 
  2. User Surveys - these are carefully designed, methodologically sound instruments you issue to users or a sample of users that gather feedback in a way that can be aggregated, compared and reported on.
  3. Productivity Studies - a blend of quant and qual, these metrics assess the time-use and user behavior associated with a resource. These can be survey-based or gathered via network, server, or keystroke monitoring.

I have always argued that information professionals should endeavor to gather measurable (and reportable) data in qualitative analyses. While feedback such as verbatims and anecdotes can be powerful ammunition in support of a purchase, these on their own are insufficient. The reality is that most senior managers respond best to numeric insight.

So how do you assign numeric values to qualitative data? What you're looking for, beyond verbatims, is a way to systematically measure business impact. A user verbatim may read: "this source is tremendously helpful. It has saved me countless hours and has helped us land three deals we wouldn't otherwise have gotten".

How do you record something like this in a way that can be compared, aggregated, analyzed and visualized?

You need to conduct a user survey. This doesn't have to be hard - you're not trying to recreate a Gallup poll or comScore panel! Here's what I recommend.

As part of your renewal process, survey a sample of end users about the product's value. Your survey questions should be laser-focused on outcomes. Don't worry about usage or process stats - you're collecting that already! Some example survey questions:
  1. On a scale of 1 to 5, with 5 being the highest, how important would you rate Horizon Research to your job? (Scale)
  2. On a scale of 1 to 5, with 5 being the highest, how valuable is Horizon Research to the work product you deliver to clients? (Scale)
  3. Your subscription to Horizon costs $12,000 per year. At that price, do you think it is worth renewing this service? (Y/N)
  4. Have you landed deals or won business because you have access to Horizon Research? (Y/N)
  5. On a scale of 1 to 5, with 5 being the highest, how much more productive are you because you have access to Horizon Research? (Scale)
  6. On a scale of 1 to 5, with 5 being the highest, how satisfied are you with the quality of the data you get from Horizon? (Scale)
Remember, you're focused on two things with these surveys: business impact and measurable data. Scale and Y/N questions like the above measure impact and can be easily aggregated and reported on.
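Aggregation of these answers is straightforward. A minimal sketch, using hypothetical responses from six surveyed users:

```python
# Aggregating Scale and Y/N survey answers into reportable numbers.
# The response data below is hypothetical, for illustration only.
scale_q1 = [5, 4, 4, 3, 5, 4]              # Q1: importance to your job, 1-5
renew_q3 = ["Y", "Y", "N", "Y", "Y", "Y"]  # Q3: worth renewing? Y/N

mean_importance = sum(scale_q1) / len(scale_q1)
pct_renew = renew_q3.count("Y") / len(renew_q3)

print(f"Mean importance: {mean_importance:.1f} / 5")  # 4.2 / 5
print(f"Would renew at current price: {pct_renew:.0%}")  # 83%
```

A mean score and a renew percentage per product lets you compare services side by side at renewal time, which a stack of verbatims never will.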

There are limitations to surveys like these of course. They are user reported, and people tend to over-report the value of a resource. Best example: Bloomberg Terminals. Ask any owner of a Bloomberg any of the above questions and the answers will be all 5s and Yes, Yes, Yes. You need to cross reference this feedback with quantitative metrics: when you look at the actual usage data, you see a very different story - sporadic logins, limited usage, and content that's cheaply available elsewhere.

You can also verify survey data with other qualitative inputs. Trust, but verify, is the name of the game. A time use productivity study is a great way to do this.

Suppose you're considering whether to renew a service that helps your employees pull and share regulatory filings. The verbatims you've gathered suggest the main value people get from it is time savings. Upon further investigation, you learn that with the service, users are spending 10 minutes/day pulling filings, and 5 minutes/day sharing the filings through the product. A similar, free service on the web has them spending 20 minutes retrieving and 10 minutes sharing filings.

The mean hourly pay for the user base is $60. There are 20 users.

Under the free service, users are spending $20 in time retrieving info and $10 sharing it. That's $400/day in retrieval and $200/day in sharing. $600 a day on this one workflow for your team!

With the paid service, the time spent and thus the cost, is half that: your team is spending $200/day on retrieval and $100 on sharing, or $300/day on this workflow.

Added up over a 200 day work year, your team spends $120,000 in time for this workflow with the free configuration, but only $60,000 in time annually with the paid service.  This is valuable data to complement your verbatims and survey data.
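The arithmetic above generalizes to any retrieve-and-share workflow. A sketch, with every figure taken from the example in the text:

```python
# Worked version of the time-use comparison above.
HOURLY_RATE = 60  # mean hourly pay for the user base
USERS = 20
WORK_DAYS = 200   # work days per year

def annual_cost(retrieve_min, share_min):
    """Annual team labor cost of the retrieve-and-share workflow."""
    daily_per_user = (retrieve_min + share_min) / 60 * HOURLY_RATE
    return daily_per_user * USERS * WORK_DAYS

free_service = annual_cost(20, 10)  # 30 min/day per user
paid_service = annual_cost(10, 5)   # 15 min/day per user

print(free_service)                 # 120000.0
print(paid_service)                 # 60000.0
print(free_service - paid_service)  # 60000.0 in annual time savings
```

If the paid service costs less than the $60,000 in time it saves, the productivity study alone makes a quantitative case for renewal.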

Qualitative inputs are critical to your ROI analyses. Where quantitative usage data measures processes, qualitative inputs like verbatims, surveys, and time use studies show outcomes and business impact.

- Kevan Huston

Wednesday, November 7, 2018

Measuring Information Resource Value, Part 2: Quantitative Metrics

In Part 1 of the Measuring Information Resource Value (MIRV) series I explained why it's a bad idea to rely on vendor-supplied usage and user data as the basis for measuring the value (i.e. ROI) of a resource.

We now need to review the two categories of metrics you will need to use to determine the value of a resource, both of which you can and should gather internally.

There are two broad categories of metrics: Quantitative and Qualitative. As a rule, Quantitative metrics capture processes, and Qualitative metrics capture impact.

In this post I review Quantitative metrics.

Quantitative metrics are helpful because, as numeric data, they can easily be converted into analyses that communicate the reach, cost and usage of a resource.

Quantitative metrics are typically of three kinds: 

Reach Metrics
Reach metrics show the footprint of a resource (or its data) within your organization. This is as simple as the number of users for a product: YourCo subscribes to Horizon Research and has 5 licenses to access their website. The licenses are assigned to 5 different people and the seats are non-transferable, i.e. they can’t be shared (nor can the content obtained via these seats).

Cost Metrics
Cost metrics are simply the cost to subscribe and own a resource. YourCo’s Horizon Research package costs $60,000 per year. Maintenance and ownership costs are de minimis, as the content is hosted on the Horizon Research website. Total Cost is $60K.

At this point you can already come up with some modestly useful analyses:

Horizon Research cost: $60,000 per year. Number of users: 5.

Cost per user: $12,000/yr. or $1,000 per month.

Seats per Employee: You have 100 employees, so there’s 1 seat for every 20 employees, and the content and licenses can’t be shared internally.

These are very simple metrics, and easily calculated. But they tell us nothing about what a user is using, how they are using it, and to what effect.
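These first-pass metrics can be computed directly from the figures above:

```python
# The simple reach and cost metrics above, computed directly.
annual_cost = 60_000
seats = 5
employees = 100

cost_per_user_year = annual_cost / seats       # $12,000/yr
cost_per_user_month = cost_per_user_year / 12  # $1,000/mo
employees_per_seat = employees / seats         # 1 seat per 20 employees

print(cost_per_user_year, cost_per_user_month, employees_per_seat)
```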

Usage Metrics
Here’s where things get fun, and where you can start determining what and how a subscriber is using a given resource. Things like page views, log-ins, time spent, downloads. 

Let's apply some of these to YourCo's Horizon Research subscription.

[Table 1: Annual usage by user (logins, page views, downloads) for each Horizon Research seat]
For our users, we can see there's wide variance in usage and in how they use the product. Joan, clearly, is a casual user with only a handful of logins. Meanwhile, Charlie is a true power user, averaging almost 2 logins a day, and tens of thousands of page views and downloads.

To provide additional analysis you can create ratios from these stats for additional insight into how people are using a resource:

[Table 2: Usage ratios by user (e.g., page views per login)]
The numbers tell you a slightly different story than the raw data. David, who only logged in 30 times over 9 active months, viewed far more pages per login than any other user - something you might miss just using the raw data in Table 1. We also see that casual user Joan views almost as many pages as Charlie, despite the fact that she rarely uses the service. 
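Computing these ratios is trivial once the raw stats are in hand. The per-user numbers below are hypothetical stand-ins, chosen to be consistent with the story above (Joan the casual user, Charlie the power user, David the intensive-per-session user):

```python
# Turning raw usage stats into per-user ratios.
# Figures are hypothetical stand-ins for the usage table discussed above.
usage = {
    "Joan":    {"logins": 12,  "page_views": 700},     # casual user
    "Charlie": {"logins": 460, "page_views": 28_000},  # power user
    "David":   {"logins": 30,  "page_views": 4_500},   # few but deep sessions
}

for name, u in usage.items():
    pages_per_login = u["page_views"] / u["logins"]
    print(f"{name}: {pages_per_login:.0f} page views per login")
```

With these stand-in numbers, David's roughly 150 pages per login dwarfs Charlie's ~61, even though Charlie's raw totals are far larger, which is exactly the kind of insight the raw table hides.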

Putting it All Together

The real fun begins when you combine Reach, Cost and Usage metrics in some really clever and creative ways:

[Table 3: Cost and usage combined by user (e.g., cost per login, cost per page view)]
Here we can see the economics of each user's activity. It's quite clear from a pure quantitative basis that Joan is paying a lot for this service relative to her usage. The other users' cost ratios are quite similar and the unit costs are low (3 of 5 users have cost / page view under a buck). An observer could easily be convinced that the information service manager should see about reassigning Joan's seat, or upon renewal, simply get 4 seats instead of 5. 
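The unit-cost arithmetic behind that observation is simple division. The per-user page view figures below are hypothetical stand-ins, consistent with the $12,000-per-seat cost established earlier:

```python
# Combining cost and usage into per-user unit costs.
# Page view figures are hypothetical stand-ins for the table above.
cost_per_seat = 12_000
page_views = {"Joan": 700, "Charlie": 28_000, "David": 4_500}

for name, views in page_views.items():
    print(f"{name}: ${cost_per_seat / views:,.2f} per page view")
```

With these numbers, Charlie's cost per page view is pennies while Joan's runs into the double digits, which is the pattern that tempts a manager to cut her seat.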

This would be a mistake. While these quantitative metrics are valuable, they don't tell the whole story. 

We have lots of analysis here on what's being used, by whom, when, and how, and what the nominal cost associated with this usage is, but we don't know to what effect. We don't know the business impact of this usage. In other words: we don't have enough information to determine the actual ROI of this resource for each user. 

What are the metrics that will help you with that? That's where Qualitative metrics come into play, which will be addressed in Part 3.  

- Kevan Huston