Hi Everyone…
In the last post I talked about “why” Metadata Extensions are useful and important (Building Metadata Extensions…“Why?”). Today I want to review the basic steps of a high-level “methodology” that you can apply when making decisions about extensions and their construction, one that will help you meet the objectives of your governance initiatives.
Step 1. Decide what you want to “see” in lineage and/or accomplish from a governance perspective. Do you have custom Excel spreadsheets that you would like to illustrate as little “pie charts” in a lineage diagram, as the ultimate targets? Do you have mainframe legacy “green screens” whose names business users would like to see as “icons” in a business lineage report? Are there home-grown ETL processes that you need to identify, at least by “name”, when they move data between a flat file and your operational data store? Lineage helps boost users’ confidence, whether they are report and ETL developers, DBAs tracking actual processes, or reporting users who need some level of validation of a data source. Which objects are “missing” from today’s lineage picture? Which ones would add clarity to the users’ “big picture”? Each of the use cases above represents a scenario where lineage from the “known” sources (such as DataStage) wasn’t enough. There are no industry “bridges” for custom SQL, personalized spreadsheets, or home-grown JavaScript. And in the green-screen case, the natural lineage that illustrated the fields from a COBOL FD left business users confused and in the dark.
The “…accomplish from a governance perspective” in the first sentence above takes this idea further. The value of your solution is not just lineage — it will be valuable to assign Stewards or “owners” to those custom reports, or expiration dates to the green screens. Perhaps those resources are also influenced by formal Information Governance Rules or Terms in the business glossary. The need to manage those resources, beyond their function in lineage, is also something to measure.
Step 2. How will you model it inside of Information Server? Once you know which objects and “things” you want to manage or include in lineage, what objects should you use inside Information Server? The answer to this is a bit trickier. It requires some knowledge of Information Server and its metadata artifacts: how they are displayed, which ones exist in a parent-child hierarchy (if that is desirable), which ones are dependent upon others, what their icons look like in data lineage reports, and so on. There aren’t any “wrong” answers here, although some methods will have advantages over others. There are many kinds of relationships within Information Server’s metadata, and nearly anything can be illustrated. Generally speaking, if the “thing” you are representing is closest in concept to a “table” or a “file”, then use those formal objects (Database Tables and Data Files). If it is conceptual, consider a formal logical modeling object. If it looks and tastes like a report, then a BI object (pie chart) might be preferred. If it is something entirely odd or abstract (the green screen above, or perhaps a proprietary message queue), then consider an Extended Data Source. I’ll go into more detail on each of these in later posts, but for now, from a methodology perspective, consider this your planning step. It often requires some experimentation to determine how best to illustrate your desired “thing”.
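The modeling guidance above can be sketched as a simple lookup. This is purely illustrative, not an Information Server API: the “kind” keys are my own shorthand for the categories discussed, and the object-type names should be verified against your Information Server version.

```python
# Illustrative sketch of the Step 2 decision: map the kind of "thing" you
# need to represent to a candidate Information Server object type.
# Keys and values here are shorthand for the categories discussed above.
CANDIDATE_OBJECTS = {
    "table":   "Database Table",
    "file":    "Data File",
    "concept": "Logical Model object",
    "report":  "BI object",
}

def suggest_object(kind: str) -> str:
    """Return a starting-point object type; odd or abstract things
    (green screens, proprietary queues) fall back to Extended Data Source."""
    return CANDIDATE_OBJECTS.get(kind, "Extended Data Source")

print(suggest_object("report"))        # BI object
print(suggest_object("green screen"))  # Extended Data Source (fallback)
```

As the post says, there are no “wrong” answers; a table like this just makes the starting assumptions explicit before you begin experimenting.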
Step 3. How much detail do you need? This question is a bit more difficult to answer, but consider the time-to-value needed for your governance solution and what your ultimate objectives are. If you have a home-grown ETL process, do you need to illustrate every single column mapping expression and its syntax? Or do you just need to be able to find “that” piece of code within a haystack of hundreds of other processes? Both are desirable, but of course there is a cost attached to capturing explicit detail. More detail requires more mappings, and potentially more parsing (see further steps below). A case in point is a site looking at the lineage desired for tens of thousands of legacy COBOL programs. They have the details in a spreadsheet that will provide significant lineage: module name, source dataset, and target dataset. Would they benefit from having individual MOVE statements illustrated in lineage and searchable in their governance archive? Perhaps, but if they can locate the exact module in a chain in several minutes, something that often takes hours or even days today, the detail of that code can easily be examined by pulling the source code from available libraries. Loading the spreadsheet into Information Server is child’s play; parsing the details of the COBOL code, while interesting and potentially useful, is a far larger endeavor.

On a lesser note, “how much detail you need” is also answered by reviewing Information Server technology and determining things like “Will someone with a Basic BG User role be able to see this ‘thing’?”, which leads to “Do I want every user to see this ‘thing’?”. Also important is whether the metadata detail you are considering is surfaced directly in the detail of lineage, or whether you have to drill down in order to view it. How important is that? It depends on your users, their experience with Information Server, how much training they will be getting, and so on.
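To see why coarse-grained detail can be such a bargain, consider a sketch of that COBOL inventory spreadsheet. With one row per module (the column names and sample data below are hypothetical), each module yields just two lineage hops, and no COBOL parsing is required at all:

```python
import csv
import io

# Hypothetical inventory: one row per COBOL module, as described above.
# Column names and values are invented for illustration.
inventory = io.StringIO(
    "module,source_dataset,target_dataset\n"
    "PAYROLL1,HR.MASTER,PAY.EXTRACT\n"
    "PAYROLL2,PAY.EXTRACT,GL.POSTING\n"
)

# Coarse lineage: two directed hops per module, data in and data out.
edges = []
for row in csv.DictReader(inventory):
    edges.append((row["source_dataset"], row["module"]))
    edges.append((row["module"], row["target_dataset"]))

for src, tgt in edges:
    print(f"{src} -> {tgt}")
```

Illustrating individual MOVE statements would multiply those two hops per module into dozens or hundreds, which is exactly the cost trade-off the step describes.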
Step 4. Where can you get the metadata that you need? Is it available from another tool or process via extract? In XML? In .csv? Something else? Do you need to use a Java or C++ API to get it? Do you have those skills? Will you obtain the information (descriptions, purposes) by interviewing the end users who built their own spreadsheets? Is it written in comments in Excel? Some of the metadata may be floating in the heads of your enterprise users and other employees; structured interviews may be the best way to capture that metadata and expertise for the future. Other times it is in a popular tool that provides push-button exports, or one with an open-enough model to go directly after its metadata via SQL. ASCII-based exports/extracts have proven to be one of the simplest methods. Governance teams are usually technical, but they often lack resources with lower-level API skills. Character-based exports, whether XML, .csv, or something else, can be read by many ETL tools and popular scripting languages like Perl, or even manipulated by hand with an editor like Notepad. I use DataStage because it’s there and I am comfortable with it, but the key is that you need to easily garner the metadata you decided you need in the previous steps.
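The appeal of character-based exports is how little tooling they demand. As a sketch, here is a few lines of scripting against a made-up XML export; the element and attribute names are invented for illustration, since every source tool will have its own schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical character-based export from some reporting tool.
# The <reports>/<report> schema here is invented for illustration.
export = """<reports>
  <report name="Quarterly Sales" owner="jdoe">
    <comment>Built from the regional spreadsheets</comment>
  </report>
</reports>"""

root = ET.fromstring(export)
for rpt in root.findall("report"):
    name = rpt.get("name")
    owner = rpt.get("owner")
    desc = rpt.findtext("comment", default="")
    print(name, "|", owner, "|", desc)
```

A DataStage job, a Perl one-liner, or hand edits in Notepad could accomplish the same thing; the point is that no low-level API skills are needed once the metadata is in a character format.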
Step 5. Start small! This could easily be one of the earlier steps — the message here is “don’t try to capture everything at once”. Start with a selected set of metadata, perhaps related to one report, or one project. Experiment with each of the steps here with that smaller subset — giving you the flexibility to change the approach, get the metadata from somewhere else, model it differently or change your level of detail as you trial the solution with a selected set of users. Consider the artifacts that will have the most impact, especially for your sponsors. This will immediately focus your attention on a smaller set of artifacts that need to be illustrated for lineage and governance, and allow you to more quickly show a return on the governance investment that you are making.
Step 6. Build it! [and they will come :) ] Start doing your parsing and construct Extensions per your earlier design. Extension Mapping Documents are simple .csv files; no need for Java or .NET or other API calls. Adding objects and connecting them for lineage is easy. Extended Data Sources, Data Files, Terms, BI objects: each is created using simple .csv files or, in the case of Terms, XML. I suggest that you do your initial prototypes entirely by hand. Learn how Extensions and other such objects are structured, imported, and stored. As noted earlier, I will go into each of these in more detail in future posts, but all of them are well documented and easily accessible via the Information Server user interfaces. Once you have crafted a few, test the objects for lineage. Assign Terms to them. Experiment with their organization and management. Assign Stewards; play with adding Notes. Work with Labels and Collections to experience the full breadth of governance features that Information Server offers. Then don’t wait: get this small number of objects into the hands of those users, all kinds of users. Have a “test group” that includes selected executives, business reporting users, and decision makers in addition to your technical teams. Get their feedback and adjust the hand-crafted Extensions as necessary. Then you can move on and investigate how you’d create those objects in automated fashion and load them via the command line instead of the user interfaces.
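Once the hand-crafted prototypes are settled, automating the .csv generation is straightforward. The sketch below writes a mapping-style file from source/target pairs; the column headers here are illustrative only, so follow the Extension Mapping Document layout documented for your Information Server release:

```python
import csv
import io

# Hypothetical lineage pairs gathered in earlier steps.
pairs = [
    ("HR.MASTER", "PAY.EXTRACT", "PAYROLL1 nightly load"),
    ("PAY.EXTRACT", "GL.POSTING", "PAYROLL2 posting step"),
]

# Write a mapping-style .csv. The header row below is illustrative;
# use the documented Extension Mapping Document column layout.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Source", "Target", "Description"])
for source, target, description in pairs:
    writer.writerow([source, target, description])

print(buf.getvalue())
```

In practice `buf` would be a real file handed to the import; the same loop scales from a handful of hand-checked rows to the full automated extract.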
Keep track of your time while doing these things so that you can measure the effectiveness of the solution vis-à-vis the effort required. For some of your extensions, you may decide that you only need a limited number of objects, and that they almost never change, so no future automation will be necessary. For others, you may decide that it is worth investing in your own enterprise’s development of a more robust parser or extract-and-create-extension mechanism that can be re-run as metadata stores change over time. This also makes it simpler to determine when it makes sense to invest in an IBM partner solution: existing “metadata converters” that already load the repository. These are trusted partners who work closely with all of us at IBM to build solutions that have largely answered the methodology questions above in their work at other locations. IBM Lab Services can also help you build such interfaces. When appropriate and market forces prevail, IBM evaluates such interfaces for regular inclusion in our offerings.
Ultimately, this methodology provides you with a road map towards enhancing your governance solution and meeting your short and longer term objectives for better decision making and streamlined operations via Information Governance.
-Ernie