Friday, November 30, 2018

Week in Review, 12/1/2018


IHS Markit announced a new service that administers FIX connectivity for asset managers. The new offering enhances the managed service version of thinkFolio, the order management and portfolio modeling solution from IHS Markit. By managing FIX connectivity on customers’ behalf, IHS Markit helps firms reduce cost and the technical complexity of self-administering a FIX network. 11/27/2018.

China will allow overseas credit information firms to participate in a nationwide credit information monitoring system that the central bank is building, according to the China Daily newspaper, citing a senior central bank official. US credit information provider Dun & Bradstreet registered with the central bank in 2017 to be a part of the scheme. 11/25/2018.

Corporate Restructuring

Refinitiv, a financial data and trading company, is planning to cut about 2,000 jobs after its takeover by private-equity firm Blackstone Group LP. The group is looking to strip out $650 million of costs in a bid to reshape and compete better with rival Bloomberg LP amid an increasing demand for data from banks and fund managers. 11/28/2018.


Chris Stibbs has stepped down as chief executive of The Economist after five years. The succession process could take between six and 12 months. Stibbs will remain in his role until The Economist Group’s board appoints a successor. Stibbs’ departure follows several high-level exits at The Economist this year, including chief technology officer Michael Brincat, chief financial officer Toby Burton, and group president Paul Rossi. 11/27/2018.

Bob Sauerberg will depart as CEO of Condé Nast, which plans to merge U.S. and international operations and has launched a search for a new chief exec to run the company as a global media entity. Sauerberg will continue as CEO of Condé Nast U.S. until the new global chief executive is on board. He will continue to represent Advance (Condé Nast’s parent company) on the board of Reddit, in which Advance holds a minority stake. 11/27/2018.

Stephen C. Daffron has agreed to join Dun & Bradstreet as President. Mr. Daffron will bring decades of senior leadership experience, operational excellence and a deep knowledge of financial services technology, data and analytics to the Company.

The "Preliminary Discussions" Gambit.

Procurement best practices argue that for the most part you want a competitive bidding environment when you're looking at procuring a product or service. It's extremely rare that there aren't at least 2 major players to choose from for any class or category of information service. Consider:

  • Financial Analytics platforms: Bloomberg, FactSet, S&P Capital IQ, Refinitiv Eikon
  • News Providers: Lexis-Nexis, Dow Jones Factiva, Acuris, TheDeal
  • IT Market Research: Forrester, Gartner, IDC, IHS, Ovum
  • Audience Measurement: comScore, Nielsen, Quantcast, SimilarWeb
  • Oil & Gas: Drilling Info, IHS, Wood Mackenzie
  • Private Markets: Preqin, Pitchbook, Datafox, CB Insights

and so forth.

In each case, you'd want to create a competitive process to get the best price. If, for example, Factiva thinks you may go with Lexis-Nexis, you're likely to get a far better deal than if they think they're the only outfit under consideration.

But sometimes, you can leverage an early non-competitive discussion into a great deal - if you're willing to go with a vendor without running a full vetting process.

There are risks, obviously. You're proceeding with a service without conducting a detailed comparison of competing products. You also run the risk that you may overpay.

But if the circumstances permit, you can try what I like to call the Preliminary Discussions Gambit (PDG).

Here are the necessary preconditions for a successful PDG:

  1. The vendor is a disruptor in the space, challenging entrenched incumbents;
  2. The vendor is flexible on price;
  3. The vendor doesn't have a lot of customers yet, but has a fully commercialized and usable product;
  4. The vendor is often PE-backed and looking for customer growth rather than maximizing revenue (i.e. they have cash to burn);
  5. The product you're buying isn't business critical and is a low-risk acquisition - the cost of failure is low;
  6. You're already somewhat familiar with the products and vendors in the category.

To execute a successful PDG, tell the vendor you're just in "preliminary discussions" and that you haven't begun speaking with any other firms - typically the incumbents they're trying to take business from. You're almost always going to get a better deal if the vendor reaches out to you first, but it's fine if you reach out to them.

Many times, a vendor will try to preempt your vetting process and land a deal before you've started. That's ok! In fact, use it to your advantage. You will never have more leverage than at this point in the negotiating process - take advantage of it. You might even make the first offer - and be quite aggressive about it. You may be surprised - the vendor may go for it.

It's rare that you will take a product from a vendor without undertaking a thorough review of the landscape. But occasionally, the circumstances are ripe for the Preliminary Discussions Gambit. Don't hesitate to try it - as long as you know the risks.

- Kevan Huston

Rumor Mill: BC Partners Looking for Acuris Exit

Acuris, a news publisher for the corporate finance sector, may soon be up for sale. BC Partners are reportedly looking at an exit:
The sources said BC Partners was looking to start an auction process early next year and was working closely with the management team to prepare the financial documentation for potential bidders.

Acuris is a corporate and organizational rebrand of MergerMarket, which now includes several other businesses such as DebtWire, Unquote, and Xtract. Acuris most recently purchased Spark Spread, a utilities information provider, and plans to roll it into its Inframation product.

Singapore wealth fund GIC owns a minority stake.

- Kevan Huston

Thursday, November 29, 2018

Index Industry Market Share Concentration: The asset management industry responds

A stat in a recent FT article on Index market share really jumped out at me:
The big three index providers, S&P Dow Jones, MSCI and FTSE Russell, have increased their dominance in recent years. Their combined market share has risen to nearly 80 per cent from about two-thirds in 2012, according to Burton-Taylor.
A 20% increase in market concentration over 5 years is remarkable. Yet there are signs that the asset management industry has had enough. Enter self-indexing, from the giant investment manager Fidelity. Writes Institutional Investor:
Fidelity says it is the first to offer zero expense ratio self-indexed funds to consumers. The move is significant as passive-fund providers race to cut costs and capture market share of retail investors.
Fidelity isn't alone. Schwab, BlackRock, State Street and Invesco have all followed suit.

Still, indexes are likely to remain a huge part of the market data industry. Only the largest investment managers could afford to offer fee-free funds or subsidize a self-indexing business. And only the strongest, most trustworthy brands - Fidelity, BlackRock and the like - could pull off self-indexing.

There are transparency risks, too. Will investors be able to audit an asset manager's index structure and methodology? Are the savings in fees large enough to offset the reassurance investors get from owning a fund that's benchmarked to an independent and (ostensibly) trustworthy third party like S&P or MSCI?

The Index industry, for its part, highlights the regulatory and data management complexities of the business:
However, assuming the role of an index provider is something that should not be taken lightly. Running global indexes in real-time within a regulated environment is not as easy as it looks. Do you have the right mix of people with index and data expertise? Are you willing to invest in data, infrastructure and governance? Are you able to achieve a desirable scale and profit margin to deliver cost savings to investors? What is your risk appetite related to brand and investor impact?
Let's see what happens. The Index industry will likely remain robust and sizable for the foreseeable future. Ever more asset classes and asset variants are available to investors and benchmarks are needed. As trading commissions continue to shrink, fund variety should expand, along with indices.

New Thought Leadership and Market Research, November 2018

Each month we list select research, thought leadership and analysis that may be of interest to market data and information management professionals.

Banking on the Cloud, Quinlan Associates, August 2018

Information Industry Outlook, Outsell, October 2018 (purchase required)

Exchange Global Market Share & Segment Sizing, Burton-Taylor Consulting, October 2018 (purchase required)

RiskTech100 2019, Chartis Research, November 2018

Thursday, November 15, 2018

Measuring Information Resource Value, Part 4: Users

We saw in previous entries in the MIRV series that you need two types of metrics: quantitative and qualitative.  Users are applicable to both sets of metrics: you need users to measure usage, and you need to know who to survey to determine the business impact of a service.

But before you can run your quantitative and qualitative analyses, you need to define what actually constitutes a user.

Wait - is this even controversial? You have no idea. (Actually, you probably do). Let's take a look at the problem.

Broadly, there are two different categories of agreements - Per Seat and Group.

The Per Seat User
Among the dimensions that are in constant tension between you and your vendors is how you define a user.

Back in the day, you would buy a subscription to Horizon Research that had 15 seats. Fifteen seats = 15 users. Right?

Nominally, that's true. And you can bet the vendor would love the discussion to end there. But you don't want it to -- at least in the context of determining value. You need more detail about how a service is being used.

How you define a user can, and should, be scoped by each user's usage.  (I will discuss usage metrics in Part 5 of the MIRV series at length). 

In the above example: of the 15 users, only 10 logged in more than once per month. And of those ten, only 8 viewed, on average, more than one page per login. In fact, the usage distribution for Horizon in your company is Pareto distributed: 80% of usage comes from just 3 users.

So you've got 3 users who've never logged in. You've got another 2 who logged in only 4 times all year. So you could argue only 10 users really count for your ROI analysis. But worth noting: 3 of these ten account for almost 80% of usage!

So how do you define User here: 3, 10, 12, or 15? My money is on 10. Why not 3? Because you don't know the business impact of the usage from your 7 active-but-limited users. You need to consider the qualitative metrics I discuss in Part 3 of the MIRV series.

In other words: The definition of user is inextricably linked to usage. You may have seats that are extraneous to your value analysis. Be ruthless in disqualifying your casual or inactive users in order to come up with a meaningful footprint for the resource. And then you can proceed to assess the business impact (i.e. qualitative metric analysis).
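As a sketch, that disqualification step is easy to mechanize. The seat data and the activity threshold below are invented for illustration, but the shape mirrors the 15-seat Horizon example (3 seats never used, 2 barely used, 10 meaningful users):

```python
# Hypothetical annual login counts for the 15 Horizon Research seats.
logins = {
    "seat01": 0,   "seat02": 0,   "seat03": 0,    # never logged in
    "seat04": 4,   "seat05": 4,                   # a handful of logins all year
    "seat06": 15,  "seat07": 18,  "seat08": 20,
    "seat09": 25,  "seat10": 30,  "seat11": 35,
    "seat12": 40,                                 # active but limited
    "seat13": 400, "seat14": 450, "seat15": 500,  # the three power users
}

# "More than once per month" as the activity bar -- an assumed threshold.
MIN_LOGINS_PER_YEAR = 12

def meaningful_users(logins, threshold=MIN_LOGINS_PER_YEAR):
    """Seats that clear the activity bar and count toward the value analysis."""
    return sorted(seat for seat, n in logins.items() if n >= threshold)

active = meaningful_users(logins)
print(len(active))  # 10 of 15 seats survive the cut
```

Only those surviving seats then feed the qualitative analysis; the threshold itself is a judgment call you should be prepared to defend.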

The Group User
Nominally defining the number of users for a per seat license is pretty simple. Even netting out casual or inactive users, while subject to debate, is easy to define.

But what about when you have a group license? What if you're licensing a web app or even a data feed that doesn't come with a fixed number of subscribers, seats, users, etc.?  Typically, a vendor will try to define your group as broadly as possible, including as many potential users as possible.

You will want the opposite: you'll want to define the user base as narrowly as possible because you want the license (and the cost) to reflect the actual value of the service to the firm.

For example: Horizon is offering you enterprise access to their syndicated research for $120,000 a year. Awesome! Especially when you consider they were charging $42,000 for just 6 licenses. As the vendor explained to you: we're giving you access for all 60 people in your firm for just $2,000 per user per year, instead of the $7,000 per seat you pay under the 6-seat license.

But hold on - the vendor is arguing that everyone in your company is a user. This doesn't reflect the reality of how the service will be used, and by whom. Consider the many facets you should apply to your user base to come up with a more realistic denominator to define User.

- 40 of the 60 people in your company are revenue generating (P&L);
- 30 of those 40 revenue-generating employees are in the division that needs the service (Division);
- 10 of the 30 users in your division are senior staff who won't be doing research (Rank);
- of the 20 remaining junior staffers, 15 work in consulting and need the service; the other 5 are in sales and don't (Role).

That takes your Actual User base down to 15 people within a firm of 60. That's a nominal per-seat cost of $8,000 per Actual User.

(I talk at length about enterprise versus per seat decision-making here.)

Whatever number you decide to use, your next step in defining users in a group license is to apply the Per Seat user instructions I offer above: you now need to look at usage. Usage stats almost always whittle the number down further - thus increasing the per seat cost. In this example, of the 15 Actual Users - 5 of those 15 may have never logged in, leaving 10 users and increasing the per seat cost to $12K!
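The whole narrowing exercise amounts to a funnel over headcount. A sketch, restating the hypothetical Horizon numbers above (all figures illustrative):

```python
ANNUAL_COST = 120_000  # hypothetical enterprise price for Horizon Research

# Each stage removes people who can't plausibly derive value from the service.
funnel = [
    ("Headcount",                       60),
    ("P&L (revenue generating)",        40),
    ("Division (needs the service)",    30),
    ("Rank (exclude senior staff)",     20),
    ("Role (consulting only)",          15),  # "Actual Users"
    ("Usage (exclude never-logged-in)", 10),
]

for stage, n in funnel:
    print(f"{stage:34s} {n:3d} users -> ${ANNUAL_COST / n:,.0f} per seat")
```

Each stage shrinks the denominator and raises the true per-seat cost, from $2,000 at full headcount to $12,000 once usage is factored in.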

Whether you're dealing with a per-seat license or a group license, you need to define user thoughtfully. Critical to that process is Usage, the subject of Part 5.

- Kevan Huston

Sunday, November 11, 2018

Measuring Information Resource Value, Part 3: Qualitative Metrics

In the second part of the Measuring Information Resource Value series, I laid out the case for gathering quantitative data inputs toward an ROI decision. Now we must look at what I call qualitative metrics.

Where quantitative metrics measure processes, qualitative metrics measure outcomes. I typically divide Qualitative metrics into three categories:

  1. Verbatims - these are written testimonials of the value of a service. They can be solicited as part of a formal feedback process, or unsolicited when users simply tell you how much they like a service. 
  2. User Surveys - these are carefully designed, methodologically sound instruments you issue to users or a sample of users that gather feedback in a way that can be aggregated, compared and reported on.
  3. Productivity Studies - a blend of quant and qual, these metrics assess the time-use and user behavior associated with a resource. These can be survey-based or gathered via network, server or keystroke monitoring.

I have always argued that information professionals should endeavor to gather measurable (and reportable) data in qualitative analyses. While feedback such as verbatims and anecdotes can be powerful ammunition in support of a purchase, these on their own are insufficient. The reality is that most senior managers respond best to numeric insight.

So how do you assign numeric values to qualitative data? What you're looking for, beyond verbatims, is a way to systematically measure business impact. A user verbatim may read: "this source is tremendously helpful. It has saved me countless hours and has helped us land three deals we wouldn't otherwise have gotten".

How do you record something like this in a way that can be compared, aggregated, analyzed and visualized?

You need to conduct a user survey. This doesn't have to be hard - you're not trying to recreate a Gallup poll or comScore panel! Here's what I recommend.

As part of your renewal process, survey a sample of end users about the product's value. Your survey questions should be laser-focused on outcomes. Don't worry about usage or process stats - you're collecting that already! Some example survey questions:
  1. On a scale of 1 to 5, with 5 being the highest, how important would you rate Horizon Research to your job? (Scale)
  2. On a scale of 1 to 5, with 5 being the highest, how valuable is Horizon Research to the work product you deliver to clients? (Scale)
  3. Your subscription to Horizon costs $12,000 per year. At that price, do you think it is worth renewing this service? (Y/N)
  4. Have you landed deals or won business because you have access to Horizon Research? (Y/N)
  5. On a scale of 1 to 5, with 5 being the highest, how much more productive are you because you have access to Horizon Research? (Scale)
  6. On a scale of 1 to 5, with 5 being the highest, how satisfied are you with the quality of the data you get from Horizon? (Scale)
Remember, you're focused on two things with these surveys: business impact and measurable data. Scale and Y/N questions like the above measure impact and can be easily aggregated and reported on.
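Rolling those answers up is straightforward: report means for Scale questions and yes-rates for Y/N questions. A sketch, with invented responses from a hypothetical 6-user sample:

```python
from statistics import mean

# Invented answers: scale questions are 1-5, yes/no questions are booleans.
responses = {
    "importance_to_job": [5, 4, 4, 3, 5, 2],                       # Q1 (Scale)
    "value_to_clients":  [4, 4, 5, 3, 4, 3],                       # Q2 (Scale)
    "worth_renewing":    [True, True, True, False, True, True],    # Q3 (Y/N)
    "won_business":      [True, False, True, False, False, True],  # Q4 (Y/N)
}

summary = {}
for question, answers in responses.items():
    if isinstance(answers[0], bool):  # Y/N question -> percentage of yes
        summary[question] = f"{100 * sum(answers) / len(answers):.0f}% yes"
    else:                             # Scale question -> mean score
        summary[question] = f"mean {mean(answers):.1f} / 5"

for question, stat in summary.items():
    print(question, "->", stat)
```

The output (e.g. "worth_renewing -> 83% yes") is exactly the kind of numeric insight senior managers respond to, and it can be tracked renewal over renewal.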

There are limitations to surveys like these of course. They are user reported, and people tend to over-report the value of a resource. Best example: Bloomberg Terminals. Ask any owner of a Bloomberg any of the above questions and the answers will be all 5s and Yes, Yes, Yes. You need to cross reference this feedback with quantitative metrics: when you look at the actual usage data, you see a very different story - sporadic logins, limited usage, and content that's cheaply available elsewhere.

You can also verify survey data with other qualitative inputs. Trust, but verify, is the name of the game. A time use productivity study is a great way to do this.

Suppose you're considering whether to renew a service that helps your employees pull and share regulatory filings. The verbatims you've gathered suggest the main value people get from it is time savings. Upon further investigation, you learn that with the service, users are spending 10 minutes/day pulling filings, and 5 minutes/day sharing the filings through the product. A similar, free service on the web has them spending 20 minutes retrieving and 10 minutes sharing filings.

The mean hourly pay for the user base is $60. There are 20 users.

Under the free service, users are spending $20 in time retrieving info and $10 sharing it. That's $400/day in retrieval and $200/day in sharing. $600 a day on this one workflow for your team!

With the paid service, the time spent and thus the cost, is half that: your team is spending $200/day on retrieval and $100 on sharing, or $300/day on this workflow.

Added up over a 200 day work year, your team spends $120,000 in time for this workflow with the free configuration, but only $60,000 in time annually with the paid service.  This is valuable data to complement your verbatims and survey data.

Qualitative inputs are critical inputs for your ROI analyses. Where quantitative usage data measures processes, qualitative inputs like verbatims, surveys, and time use studies are used to show outcomes and business impact.

- Kevan Huston

Friday, November 9, 2018

Week in Review - 11/10/2018


A new study from Element 22 finds that while large asset managers are investing in big data analytics and alternative data, it’s a fraught process. Those most aggressively pursuing advanced analytics and alternative data strategies are investing 2-3% of annual revenues. 11/7/2018.

Private Equity

Pamlico Capital has agreed to make an investment in TRG Screen, a provider of enterprise subscription management solutions. No financial terms were disclosed. As a result of the transaction, Polaris Partners will exit its stake in TRG Screen. SunTrust Robinson Humphrey provided financial advice to Polaris Partners and TRG Screen. 11/7/2018.


Thomson Reuters reported results for the third quarter ended September 30, 2018. The company also reaffirmed and updated part of its Outlook for 2018 as previously provided in May 2018. “Our third-quarter results continued to build on a solid first half,” said Jim Smith, president and chief executive officer of Thomson Reuters. “Accelerating sales momentum and strong recurring revenue growth delivered our best top-line performance in more than two years.” 11/6/2018.

Qualtrics set an $18 to $21 price range for its IPO, which would raise roughly twice its original goal of $200M. The range could also double the $2.5B valuation the startup had at its last private funding round. In the first six months of this year, Qualtrics reported a loss of $3.4M on $184.2M in revenue. 11/5/2018

Wednesday, November 7, 2018

Measuring Information Resource Value, Part 2: Quantitative Metrics

In Part 1 of the Measuring Information Resource Value (MIRV) series I explained why it's a bad idea to rely on vendor-supplied usage and user data as the basis for measuring the value (i.e. ROI) of a resource.

We now need to review the two categories of metrics you will need to use to determine the value of a resource, both of which you can and should gather internally.

There are two broad categories of metrics: Quantitative and Qualitative. As a rule, Quantitative metrics capture processes, and Qualitative metrics capture impact.

In this post I review Quantitative metrics.

Quantitative metrics are helpful because, as numeric data, they can easily be converted into analyses that communicate the reach, cost and usage of a resource.

Quantitative metrics are typically of three kinds: 

Reach Metrics
Reach metrics show the footprint of a resource (or its data) within your organization. This is as simple as the number of users for a product: YourCo subscribes to Horizon Research and has 5 licenses to access their website. The licenses are assigned to 5 different people and the seats are non-transferable, i.e. they can’t be shared (nor can the content obtained via these seats).

Cost Metrics
Cost metrics are simply the cost to subscribe and own a resource. YourCo’s Horizon Research package costs $60,000 per year. Maintenance and ownership costs are de minimis, as the content is hosted on the Horizon Research website. Total Cost is $60K.

At this point you can already come up with some modestly useful analyses:

Horizon Research cost: $60,000 per year. Number of users: 5.

Cost per user: $12,000/yr. or $1,000 per month.

Seats per Employee: You have 100 employees, so there’s 1 seat for every 20 employees, and the content and licenses can’t be shared internally.
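These first analyses are a few lines of arithmetic, using the Horizon numbers above:

```python
annual_cost = 60_000  # Horizon Research subscription
seats = 5
employees = 100

cost_per_user_year = annual_cost / seats       # cost per user per year
cost_per_user_month = cost_per_user_year / 12  # cost per user per month
employees_per_seat = employees / seats         # reach within the firm

print(f"${cost_per_user_year:,.0f}/yr, ${cost_per_user_month:,.0f}/mo, "
      f"1 seat per {employees_per_seat:.0f} employees")
```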

These are very simple metrics, and easily calculated. But they tell us nothing about what a user is using, how he is using it, and to what effect.

Usage Metrics
Here’s where things get fun, and where you can start determining what and how a subscriber is using a given resource. Things like page views, log-ins, time spent, downloads. 

Let's apply some of these to YourCo's Horizon Research subscription.

For our users, we can see there's wide variance in usage and in how they use the product. Joan, clearly, is a casual user with only a handful of logins. Meanwhile, Charlie is a true power user, averaging almost 2 logins a day, and tens of thousands of page views and downloads.

To provide additional analysis you can create ratios from these stats for additional insight into how people are using a resource:

The numbers tell you a slightly different story than the raw data. David, who only logged in 30 times over 9 active months, viewed far more pages per login than any other user - something you might miss just using the raw data in Table 1. We also see that casual user Joan views almost as many pages as Charlie, despite the fact that she rarely uses the service. 
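A quick sketch of how such ratios fall out of the raw stats. The per-user figures below are invented, chosen only to be consistent with the narrative (David logs in rarely but reads deeply; Joan's pages per login nearly match Charlie's):

```python
# Invented raw usage stats: user -> (annual logins, annual page views).
usage = {
    "Joan":    (12,  1_000),   # casual user
    "Charlie": (450, 40_000),  # power user
    "David":   (30,  9_000),   # rare but deep visits
    "Maria":   (250, 25_000),
    "Sam":     (200, 18_000),
}

pages_per_login = {u: views / logins for u, (logins, views) in usage.items()}

for user, ppl in sorted(pages_per_login.items(), key=lambda kv: -kv[1]):
    print(f"{user:8s} {ppl:6.1f} pages per login")
```

Ratios like these surface patterns (David's deep reading, Joan's surprisingly heavy per-visit usage) that raw totals hide.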

Putting it All Together

The real fun begins when you combine Reach, Cost and Usage metrics in some really clever and creative ways:

Here we can see the economics of each user's activity. It's quite clear on a pure quantitative basis that Joan is paying a lot for this service relative to her usage. The other users' cost ratios are quite similar and the unit costs are low (3 of 5 users have a cost per page view under a buck). An observer could easily be convinced that the information service manager should see about reassigning Joan's seat, or upon renewal, simply get 4 seats instead of 5.
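A sketch of the unit-cost calculation behind that observation, using invented page-view figures alongside the $12,000 seat cost from the earlier example:

```python
cost_per_seat = 60_000 / 5  # $12,000 per user per year

# Invented annual page views per user.
page_views = {
    "Joan":    1_000,
    "Charlie": 40_000,
    "David":   9_000,
    "Maria":   25_000,
    "Sam":     18_000,
}

cost_per_page_view = {u: cost_per_seat / pv for u, pv in page_views.items()}

for user, cpv in sorted(cost_per_page_view.items(), key=lambda kv: -kv[1]):
    flag = "  <- expensive relative to usage" if cpv > 1 else ""
    print(f"{user:8s} ${cpv:,.2f} per page view{flag}")
```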

This would be a mistake. While these quantitative metrics are valuable, they don't tell the whole story. 

We have lots of analysis here on what's being used, by whom, when, and how, and what the nominal cost associated with this usage is, but we don't know to what effect. We don't know the business impact of this usage. In other words: we don't have enough information to determine the actual ROI of this resource for each user. 

What are the metrics that will help you with that? That's where Qualitative metrics come into play, which will be addressed in Part 3.  

- Kevan Huston

Monday, November 5, 2018

Measuring Information Resource Value, Part 1: Vendor Supplied Data

One of the guiding principles of this blog is that you cannot determine the value of an information resource if you can't measure its usage or who is using it.

This seems axiomatic, but in practice the problem is more complicated than it first appears:
  • What do we mean by measure?
  • What do we mean by user? 
  • What do we mean by usage? 
  • What do we mean by value? 
I will explore each of these concepts in subsequent posts. But to begin, we must consider a simple question: should you rely on the usage data your vendors give you?

Simply put: no! Whenever possible, do not rely on vendor-supplied usage data.

This is not to disparage vendors qua vendors or to suggest there's anything nefarious going on. Your vendors are your partners. They aren't padding your usage stats or anything like that. We're on the same side. Usually.

So what's the problem?

First, vendors do not capture users and usage in the same way, which makes apples to apples comparisons of competing vendors much more difficult. Vendor A may have a different definition of "user" from Vendor B.  There are many ways to define user: is the person merely registered to use the service? Does he count as a user even if he doesn't log in for months on end? What about an employee who simply signed up for marketing collateral but didn't register? Some vendors will (somewhat dishonestly, imo) characterize these people as "users" - particularly if you have an enterprise license to the product.

Defining "usage" is even more complicated than defining users. Some vendors may include newsletter referral clicks as a page view while another vendor may not. Does a user who simply logs in count as usage? Did they visit any pages? Download any content? Some vendors count web page views and downloads differently; some count them the same. Some vendors log records downloaded when you download Excel files; others do not. Some vendors measure time spent, while others simply measure page views. Some measure both.

Second, vendor-only usage data is unreliable or incomplete - some vendors simply may not have the metadata you need or be able to deliver it with the frequency you need it. This is the most important issue in my mind: you don't have the data you need to determine the value of the resources you're buying. Simple as that.

The basic issue with vendor supplied data is this: in order to properly assess value, you can't rely on usage and user data from different sources. And you can't rely on data sources that don't have the metrics you need.

You need to collect your own data. Vendor data won't cut it.

In Part 2 I will address the various means by which we can capture the metrics you need -- in other words, measure your products.

- Kevan Huston

Friday, November 2, 2018

Week in Review 11/3/2018


Singapore Press Holdings (SPH) is divesting its entire stake in wholly-owned subsidiary ShareInvestor Holdings for $17 million, in a management buyout by ShareInvestor's chief executive officer Christopher Lee and chief operating officer Lim Dau Hee. 10/31/2018


Refinitiv Launches Quantitative Analytics Platform QA Direct in the Cloud with Microsoft Azure
Building on its commitment towards consumption of its financial data in the Cloud, Refinitiv, formerly the Financial and Risk business of Thomson Reuters, has launched its quantitative analytics platform QA Direct in the Cloud through Microsoft Azure. 11/1/2018.

S&P Global Platts and Intercontinental Exchange announced the launch of a new suite of North American natural gas indices to better capture dramatically altered supply and demand flows, regional price differentials and increased spot market volatility in the US and Canada. 11/1/2018.

IRI releases IRI Complete Audiences, which combines IRI’s audience targeting solutions to help advertisers select an audience composition that best fits their campaign objectives.  11/1/2018.

Equifax announced two new attribute solutions for clients, specifically in the areas of alternative data and machine learning. These attributes will help customers expedite innovation, support faster decisions, and improve speed-to-market. 10/29/2018.


Big Squid announces its recently-signed partnership with Tableau to bring its automated machine learning platform, Kraken, to Tableau customers.

Private Equity

Backstop Solutions - a provider of cloud-based efficiency software for the institutional investment industry - raised $20 million in funding this week in a round led by Vistara Capital Partners, with participation from veteran asset management investor Roger Kafker, former Morningstar COO Tao Huang, and Huizenga Capital Management president David Bradley. 11/1/2018.

Consilium Crypto - developer of a data capturing platform designed to extract financial data from market news. The company received CA$125K from Holt Incubator. 9/17/2018.

Sentieo - provider of a financial data platform intended to facilitate research on equity and investment portfolios for making investment decisions. The company raised $19 million of Series A venture funding in a deal led by Centana Growth Partners, putting its post-money valuation at $58.4 million. 10/30/2018.