A Chinese proverb says, “The best time to plant a tree was 20 years ago; the second-best time is now”.  Let’s assume you’re already up and running with a big data Hadoop platform for advanced analytics use-cases.  Perhaps you’ve ingested multi-structured data from disparate sources and are developing and delivering products as proofs-of-concept.  In order to organize the data associated with your products, make it easily searchable/findable, and rapidly provision it to your end-users, a Data Catalog is necessary.  As with the tree, the best time to implement a Data Catalog (DC) is during the early planning stages; however, the second-best time is today.

There are various use-cases that illustrate why a DC is necessary:

  • Fill in the gaps: You’re deep in the midst of a new failure analysis and find that you’re missing 60% of maintenance start dates, due to an error in the archive job; what other data sets might help fill in that missing data?
  • Explore what’s possible: You’re on the hunt for data keys that will let you hop-scotch from application to application; with a multi-step cross-reference, can you finally unlock those measurement logs from a one-time sensor study?
  • Test out hypotheses: Your team talks in anecdotes and examples; can you prove that there IS a seasonal correlation between new customers and off-season items?
  • Streamline or rationalize: You’re starting an application rationalization and want to trace data lineage from the system of record; how many different versions of “the truth” are there?
  • Learn from the traffic: You’re responsible for enterprise data governance, so you want the metadata about the DC; who is looking for what data, and how can you better meet their needs?
  • Find fresher data: The team’s monitoring report runs off of quarterly inventory losses that are allocated monthly to different organizations; can you track down the raw, weekly data so that the team isn’t surprised at month end?

These use-cases boil down to three considerations: what data do I have (and therefore don’t have) in the lake, how can I provision data effectively to enable self-service analytics, and how do I classify data so that it is most useful?

What’s in the Lake?

A DC should provide functionality and a user experience similar to those of a brick-and-mortar super-store.  Imagine your consumers needing to find proppant levels for the past 6 months for an unconventional well.  Like the sign-posts hanging from the ceiling in your local Costco, the catalog should lead them down the right aisle; for example, Upstream → Production → Unconventional → Region → Well → Proppant → Time-Frame.  Spending some time brainstorming the structure and the multiple paths to discovery will benefit end-users and keep them coming back to the service.
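To make the sign-post idea concrete, here is a minimal sketch in Python of the kind of taxonomy a catalog might expose; every name, path, and location in it is hypothetical and purely illustrative.  A consumer can either browse down a path such as Upstream → Production → Unconventional, or search a keyword index that points back to every path containing that term.

```python
# A minimal sketch (not any specific catalog product) of the "sign-post" idea:
# a nested taxonomy that guides a consumer from domain down to data set, plus a
# simple keyword index so the same data set is discoverable by more than one path.
# All names and locations below are hypothetical.

CATALOG = {
    "Upstream": {
        "Production": {
            "Unconventional": {
                "Permian": {                       # Region
                    "Well-1042": {                 # Well
                        "Proppant": "hdfs://lake/upstream/permian/well-1042/proppant"
                    }
                }
            }
        }
    }
}

def resolve(path):
    """Walk the taxonomy along the given path and return the data set location."""
    node = CATALOG
    for step in path:
        node = node[step]
    return node

def build_index(node, path=()):
    """Flatten the taxonomy into keyword -> paths, so users can search as well as browse."""
    index = {}
    for key, child in node.items():
        current = path + (key,)
        index.setdefault(key.lower(), []).append(current)
        if isinstance(child, dict):
            for keyword, paths in build_index(child, current).items():
                index.setdefault(keyword, []).extend(paths)
    return index

# Example: a consumer looking for proppant levels for a specific well.
print(resolve(["Upstream", "Production", "Unconventional", "Permian", "Well-1042", "Proppant"]))
print(build_index(CATALOG)["proppant"])
```

In this sketch, the time-frame filter from the example path would be applied against the data set itself once it has been located.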

Provisioning Best Practices

Once those users have found the right data, how do you get it into their hands?  First, a good relationship with your data source stewards is important; they need to feel secure enough to allow data consumption quickly across many requests, have line-of-sight on lineage so that derived data can be tracked through transformations, and should help tag the data coming from their respective system(s).

Second, there should be a quick turnaround between request and provisioning; otherwise, end-users’ ability to leverage data for business decisions is limited.  As such, the DC should have inherent processes for automating provisioning when and where possible.  DevOps processes and culture can go a long way toward meeting the organization’s need for rapid provisioning.  Change managers are also essential for training those stewards on the tools.
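As a sketch of what such automated provisioning could look like (the data set names, classifications, and policies below are all hypothetical), the logic auto-grants requests that a steward has already pre-approved for a given classification and escalates everything else, so routine requests never sit in a queue.

```python
# A minimal, hypothetical sketch of provisioning automation: requests against
# data sets the steward has pre-approved for a given classification are granted
# immediately; everything else is routed to the steward for review.

from dataclasses import dataclass

# Steward-maintained policy: which classifications each data set auto-approves.
AUTO_APPROVE = {
    "maintenance_events": {"internal", "public"},
    "inventory_losses": {"public"},
}

@dataclass
class ProvisioningRequest:
    requester: str
    dataset: str
    classification: str   # e.g. "public", "internal", "restricted"

def provision(req: ProvisioningRequest) -> str:
    """Grant access automatically where policy allows; otherwise escalate to the steward."""
    if req.classification in AUTO_APPROVE.get(req.dataset, set()):
        # In practice this would call the platform's access-management tooling.
        return f"GRANTED: {req.requester} -> {req.dataset}"
    return f"ESCALATED to steward: {req.requester} -> {req.dataset} ({req.classification})"

print(provision(ProvisioningRequest("analyst_a", "maintenance_events", "internal")))
print(provision(ProvisioningRequest("analyst_b", "inventory_losses", "restricted")))
```

The escalation branch keeps stewards in the loop only where their judgment is actually needed, which is where the change-management and DevOps investment pays off.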

Classification

Upon ingestion into the lake, metadata needs to be gathered and the data should be tagged – ideally by a representative (a data custodian) with significant business knowledge who can differentiate and assign tags effectively.  As shown in Figure 1, not all data is created equal, and varying levels of rigor can be applied to tagging, based on the data’s intended use.

Figure 1: Different “classes” of data have different tagging requirements, based on intended use
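To illustrate tagging with different levels of rigor (the field names and “classes” below are hypothetical, loosely echoing Figure 1), a simple validation step at ingestion can check that a data set carries the tags its intended use demands.

```python
# A minimal, hypothetical sketch of tag validation at ingestion: the metadata a
# data set must carry depends on its intended use, so exploratory data clears a
# low bar while data feeding operational or regulatory reporting is held to more.

from dataclasses import dataclass, field

REQUIRED_TAGS = {
    "exploratory": {"source_system", "data_custodian"},
    "operational_report": {"source_system", "data_custodian", "lineage", "refresh_frequency"},
    "regulatory": {"source_system", "data_custodian", "lineage", "refresh_frequency", "quality_review"},
}

@dataclass
class IngestedDataset:
    name: str
    intended_use: str              # one of the keys in REQUIRED_TAGS
    tags: dict = field(default_factory=dict)

def validate_tags(ds: IngestedDataset) -> list:
    """Return the tags still missing for the data set's intended use."""
    required = REQUIRED_TAGS.get(ds.intended_use, set())
    return sorted(required - set(ds.tags))

ds = IngestedDataset(
    name="maintenance_events",
    intended_use="operational_report",
    tags={"source_system": "CMMS", "data_custodian": "j.doe"},
)
print(validate_tags(ds))   # -> ['lineage', 'refresh_frequency']
```

A check like this gives the data custodian a concrete to-do list rather than a blanket mandate to “tag everything”.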

If you’re up and running with your Big Data engine, perhaps you’re comfortable procuring data piecemeal for pilots and the like.  That can work during inception and the early stages, but eventually you will have new ideas coming down the pike and prospective product owners approaching you to understand what’s in the lake already and what they’ll need to source.  Being able to provide that information, as well as provision and classify it effectively, will earn credibility and can foster data gravity (the idea that the more data a lake holds, the more data it will attract), which can be a key differentiator in the Enterprise Hub game.

 

Click here to learn more about our Data & Analytics practice area. Want to continue the conversation? Contact us at insights@enaxisconsulting.com.