Tradeshow season is in full force as many Serene-AST team members gear up to head to Modern Customer Experience 2017 April 25-27 in Las Vegas, NV!

Modern Customer Experience is a conference where more than 3,000 attendees will meet to network with industry peers and thought leaders, receive hands-on training, and attend sessions.

Stay tuned for more information about the Serene-AST kiosk, presentations, as well as general updates while our team is there!


MDM and Big Data Make Each Other Better

This is the fourth blog post in a series exploring Enterprise Data Governance.  In the first post, we briefly defined transaction data, metadata, master data, reference data, and dimensional data. In the second, we further explored reference data and its role in data governance solutions. In the third, we discussed data governance needs within Financial Services, a highly regulated industry, and how other industries can benefit from these capabilities.

In this installment, we bring Big Data into the discussion.

Big Data allows companies to process data sets that are too large to handle by traditional means.  These data sets can originate from within the company; for example, a large airline may produce massive volumes of diagnostic data every hour, which is far beyond what is cost effective to store long-term.   Many companies are focused on data originating from sources outside the enterprise, such as social media, financial instrument performance, or weather monitoring.  With so many varied sources of Big Data available, can big data be governed?  If so, is it worth the effort?

Before answering those questions, it’s important to point out that Big Data vendors may be pushing features of their software solutions instead of discussing Big Data governance.  Product vendors tend to discuss Big Data use cases from the factory perspective; in other words, the types of data sought or the processes being built.

Vendors will typically cover information such as:

  • Social Media Exploration
  • Internet of Things
  • Data Warehouse Modernization

However, it is essential for Big Data Strategy to include the ability to drive value from Big Data insights across relevant use cases, since use cases drive the investment. That’s where Master Data Management (MDM) comes into play.

The following should be considered:

  • Customer Analytics
  • Product Marketing Effectiveness
  • Operational Efficiencies
  • Merger and Acquisition Impacts
  • Market Opportunity Analysis

The key is understanding what value propositions are sought when investing in Big Data solutions; this will allow companies to gain a competitive advantage. Rather than attempting to govern what may be “ungovernable,” MDM seeks to bring clarity to the key aspects of the business that drive performance. This, in turn, lends clarity to key business drivers that can be improved through Big Data analysis. In other words, MDM facilitates an increase in ROI from Big Data investment by focusing on driving analysis from well-governed enterprise data.

One of the fundamental Big Data principles is that greater insights can be attained from aggregations and statistics than can be gleaned from any individual record.  For example, in order to analyze consumer sentiment regarding a product, a company may mine social media for data. However, this produces some challenges: brand sentiment is often easier to analyze than sentiment toward specific products. This is where MDM becomes fundamental: mining Big Data against a cleansed and consolidated master list of products allows sentiment to be tied back to the specific products the business actually sells.

All companies need to address similar challenges just to obtain the right subset of Big Data to analyze.  Once companies have assembled the proper datasets, what separates their effectiveness in the analysis stage is the ability to leverage master data to create meaningful aggregations.  A company that can analyze customer sentiment across geographic, business region, and operational cost dimensions will be able to make more rapid and meaningful business process adjustments than a competitor that only considers geography.  Only enterprises with well-managed MDM programs can make adjustments to business practices based on this analysis with confidence.
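The pattern described above can be sketched in a few lines of code. This is a minimal, illustrative sketch, not any vendor's API: the product names, country codes, and the crude normalization step are all invented for the example, and a real MDM solution would apply far more sophisticated match rules. It shows raw social mentions being filtered through a governed master product list and then aggregated across a region dimension drawn from master data.

```python
# Illustrative sketch: use a cleansed master product list to filter raw
# social-media mentions, then aggregate sentiment across a geographic
# dimension from master data. All values here are hypothetical.
from collections import defaultdict

# Master data: governed, consolidated product and region lists
master_products = {"acme widget": "SKU-100", "acme gadget": "SKU-200"}
region_of_country = {"US": "Americas", "BR": "Americas", "DE": "EMEA"}

# Raw Big Data feed: (free-text product mention, country, sentiment score)
mentions = [
    ("Acme Widget", "US", 0.9),
    ("acme widget!!", "BR", -0.2),
    ("Acme Gadget", "DE", 0.4),
    ("unknown brand", "US", 0.1),  # no master match -> excluded
]

def normalize(text):
    """Very rough cleanse step; real matching would use MDM match rules."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch == " ").strip()

totals = defaultdict(lambda: [0.0, 0])
for product, country, score in mentions:
    sku = master_products.get(normalize(product))
    if sku is None:
        continue  # mention doesn't map to a governed product record
    region = region_of_country.get(country, "Unknown")
    totals[(sku, region)][0] += score
    totals[(sku, region)][1] += 1

# Average sentiment per (product, region) aggregation
avg_sentiment = {key: s / n for key, (s, n) in totals.items()}
print(avg_sentiment)
```

Note how the unmatched mention is simply dropped: only data that maps to governed master records participates in the aggregation, which is exactly the clarity MDM brings to an otherwise "ungovernable" feed.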

After the initial implementation, an effective Big Data strategy will plan for growth along the capability-maturity learning curve.  A useful analogy is how master reference data is used to manage acquisitions in a phased approach.  When a business is acquired, its chart of accounts is mapped onto the parent company’s chart to produce consolidated financial results.  Sometimes the parent company’s chart of accounts must be extended to accommodate the new business.  These data sets and mappings then make their way into the data warehouse.  For conglomerates, that may be as far as it goes, but in many cases the acquired business ultimately moves to the parent company’s chart of accounts and systems, where MDM then supports a full-blown financial transformation process within the acquired business.
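The chart-of-accounts analogy above can be expressed as a small sketch. The account codes and mapping are invented for illustration; the point is the phased pattern of mapping an acquired chart onto the parent chart and extending the parent chart when no mapping exists yet.

```python
# Hypothetical sketch of the acquisition pattern described above: map an
# acquired company's chart of accounts onto the parent chart, extending
# the parent chart for unmapped accounts. Account codes are invented.
parent_chart = {"4000": "Revenue", "5000": "Cost of Sales", "6000": "Operating Expense"}

# Governed mapping maintained in the MDM / reference-data layer
acquired_to_parent = {"R-01": "4000", "C-01": "5000"}

acquired_accounts = {"R-01": "Product Revenue",
                     "C-01": "Direct Costs",
                     "X-99": "Restructuring"}  # no parent account yet

def consolidate(acquired_accounts, mapping, parent_chart):
    """Return {parent_account: [acquired accounts rolled into it]},
    extending the parent chart to accommodate unmapped accounts."""
    rollup = {}
    for code, name in acquired_accounts.items():
        parent = mapping.get(code)
        if parent is None:
            # Extend the parent chart of accounts for the new business
            parent = f"EXT-{code}"
            parent_chart[parent] = name
            mapping[code] = parent
        rollup.setdefault(parent, []).append(code)
    return rollup

rollup = consolidate(acquired_accounts, acquired_to_parent, parent_chart)
print(rollup)
```

The extended chart and updated mapping are exactly the artifacts that then flow into the data warehouse, as described above.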

Big Data follows a similar progression, where master and reference data provide the mappings for external, unstructured data sources to align with internal data sources for analytics.  As the Big Data processes mature, they influence governance processes, extending the validated code sets and mappings to accommodate high-value, unstructured data sources.  This establishes an ongoing feedback loop between MDM and Big Data that increases the effectiveness of both.

Process alignment between MDM and Big Data is critical to maximizing these synergies.  There are a multitude of valid technical options, but these are of secondary importance to the business and data governance use cases.  For example, many data architects have a preconceived notion that MDM should push master data into the data lake to better support the Big Data best practice of “transform in place.”  While this is certainly an option, solutions like Oracle’s Big Data Appliance include highly scalable technologies that allow Hadoop file storage to be accessed directly by SQL and integration technologies (bypassing batch MapReduce processing entirely), making mapping and transforming unstructured data in middleware an equally extensible approach.

In summary, Big Data analytics resemble traditional Data Warehouse analytics in that the better the data is governed, the better the insights from analysis will be.  This will always be true, regardless of the technologies utilized.


March 8th marked International Women’s Day, a day that commemorates the movement for women’s rights and celebrates women’s achievements.

So, on March 8th, 2017, our office in Pune, India also held a celebration, with some tasty snacks, of course!


This is the third blog post in a series exploring Enterprise Data Governance.  In the first one, we briefly defined transaction data, metadata, master data, reference data, and dimensional data. In the second part, we further explored reference data and its role in data governance solutions. For this installment, we will discuss data governance needs within Financial Services, a highly-regulated industry, and how other industries can benefit from these capabilities.

Most consultants would guess that data privacy is the primary data governance concern for most Financial Services executives.  Data privacy is a critical concern, and cannot be ignored in the normal course of running a stable and profitable Financial Services business.  Maintaining profitability also requires complete, timely, and accurate data to support operational decisions that align with the company strategy, in addition to regulations that require unprecedented levels of transparency and accountability.

The Sarbanes-Oxley Act was passed in 2002 to protect investors from companies’ potentially fraudulent accounting activities. It is well known that this act, which affected all US corporations, legislated individual responsibility, including some personal liability, for key executives in ensuring the accuracy and completeness of financial statements.

The Financial Transparency Act of 2015 further requires that US companies in the Financial sector make the data their financial statements are based upon open and searchable by regulatory bodies, such as the SEC.  There is also a lesser-known provision within the Financial Transparency Act that states that maintenance of reference data supporting financial reporting be made available for audit, including who made the change and when.  It explicitly states that if reporting hierarchies and selection criteria are maintained in spreadsheets, those spreadsheets need to include macros that accurately capture and retain the required fields to support regulatory audits.

Many of the largest Financial Services providers in the US, including American Express, Bank of America, Chase Bank, Wells Fargo, and dozens of others, were prepared to meet these regulations because they were already using Oracle Data Relationship Management (DRM) to master critical financial reference data.  DRM also has a large global customer base, and it is no wonder: DRM not only meets and exceeds these stringent regulatory requirements, but also allows its customers to manage reference data across a broad array of enterprise systems, data warehouses, and reporting solutions from a single point of entry and validation.  This is key for an industry full of behemoths that have grown via mergers and acquisitions, which often must manage extreme complexity in mapping their internal management processes to their externally reported line-of-business financial results with full confidence in both accuracy and audit transparency.

For the past decade, DRM has been the most powerful and complete reference data and dimension management solution commercially available, but its use has historically been limited mostly to the upper echelon of industry leaders, due in part to both its cost and its marketing focus.  That will change when Oracle releases the next generation of DRM on the Cloud.  An early release was demonstrated at Oracle Open World 2016 as Dimension Management Cloud Services, and we hope to see its production release this year.  The Cloud promises to make this technology, which has the power to manage the most complex business models in existence, available to a broader customer base at an affordable price, with greatly simplified setup procedures.


This is the second blog post in a series exploring Enterprise Data Governance.  In the first post, we briefly defined transaction data, metadata, master data, reference data, and dimensional data.  That discussion primarily focused on transactional data and metadata, and can be found here. In this post, we will further explore reference data and its role in data governance solutions.

As we move beyond transaction data and metadata, and into the realms of master and reference data, most academics and analysts tend to focus on solutions and methodologies rather than attempting to clearly differentiate between the types of data that need to be governed.  Not only does this introduce a solution bias, but it also leads to a tendency to lump these data categories together in master/slave relationships and leave it at that.  For example, reference data is commonly classified as a subset of master data, and dimensional data as a subset of reference data.

Technically, there is nothing inaccurate about these assertions, but it would be a mistake to think that a single solution can fully address all of them without first gaining an understanding of the different challenges involved in governing these various types of data.  Only then can we accurately assess the solutions and technologies that are best suited to the task.  For this purpose, we will treat master data, reference data, and dimensional data as separate, distinct categories from a governance perspective.

Reference data is the easiest of the three types to understand.  It is made up of various lists and code sets that are used to classify and organize data.  Country codes, industry codes, status codes, account types, and employee types are among the many examples of reference data.  Reference data sets can vary wildly in size and complexity.  For example, there might only be a dozen or so valid account status codes, whereas there may be over a thousand valid industry codes.  Code sets related to product SKUs, financial instruments, and the like can be much larger, ranging into the hundreds of thousands or even millions of records in rare cases.

While the concept of reference data is easy to grasp, there can also be significant complexities that need to be addressed.  Some reference data sets are standardized by regulatory or governing bodies, such as the International Organization for Standardization (ISO), which maintains standardized lists of country codes, among other things.

Another example is the US Census Bureau, which maintains the North American Industry Classification System (NAICS).  It is common for companies to require internally managed alternate code sets as well.  For example, a COTS solution may include US territories in an internal State table, requiring this alternate list to be cross-referenced to standardized state and territory code sets for regulatory purposes.

Other reference data sets need to be controlled directly by the enterprise since they relate to how business is conducted.  Sales territories, lines of business, and departments are common examples.  As mentioned previously with the State table example, this can also include the configurations of code sets within applications, such as employee types and account status, when custom business processes need to be accommodated.

From a governance perspective, mastering reference data goes beyond maintaining traditional lookup tables.  The ability to maintain well-documented business and technical definitions of code set values, including data versioning and audit history, are essential.  Functionalities for maintaining and validating mappings between related code sets are also of vital importance.
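The governance requirements above can be illustrated with a small sketch. The schema, code values, and user names are all invented for the example; real reference-data tools model this far more richly. It shows a code set that carries a version number and an audit trail on every change, plus a validation function that flags mapping entries referencing codes missing from either side.

```python
# Minimal sketch (invented schema) of governed reference-data maintenance:
# code-set values with versioning and an audit trail, plus validation that
# a cross-reference mapping only uses values present in both code sets.
from datetime import datetime, timezone

class CodeSet:
    def __init__(self, name):
        self.name = name
        self.values = {}      # code -> business description
        self.version = 0
        self.audit_log = []   # who changed what, and when

    def upsert(self, code, description, user):
        self.version += 1
        self.audit_log.append({
            "version": self.version,
            "code": code,
            "old": self.values.get(code),
            "new": description,
            "user": user,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        self.values[code] = description

def validate_mapping(mapping, source, target):
    """Return mapping entries that reference codes missing from either set."""
    return [(s, t) for s, t in mapping.items()
            if s not in source.values or t not in target.values]

internal_states = CodeSet("internal_state")    # internal COTS code set
iso_subdivisions = CodeSet("iso_3166_2_us")    # standardized code set

internal_states.upsert("PR", "Puerto Rico (territory)", "jsmith")
iso_subdivisions.upsert("US-PR", "Puerto Rico", "governance_bot")

mapping = {"PR": "US-PR", "GU": "US-GU"}       # "GU" not yet loaded
print(validate_mapping(mapping, internal_states, iso_subdivisions))
```

Every change retains who made it, what the prior value was, and when it happened, which is precisely the audit history that regulated industries depend on.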

Keep watch for the third part of this blog series, Understanding Enterprise Data Governance!


Serene-AST is pleased to introduce its CPQ for the Media industry video demonstration. We’ve developed a revolutionary solution, specific to the media industry, that drives income for companies with a digital presence by leveraging Oracle CX Cloud solutions.

In the video, three separate use cases are demonstrated.

To view the full-length video, please click here. Separate videos for each use case are also available under the same account.

Look forward to more videos in the future!


As everyone in our global locations celebrated and welcomed the new year, AST’s branch in Pune, India had something extra special to celebrate! On December 22, 2016, the Pune branch opened the doors of a new office location with an official Ribbon Cutting Ceremony.


Congratulations to everyone involved in opening the new space! We are excited about our continuing growth and expansion, and hope that everyone continues to follow us in this exciting journey.


From executive sponsorship to the execution of effective, collaborative work flows, human interaction is critical to the success of any data governance (DG) initiative.

Typically, the technical aspects of data governance are much easier to control. Given the enormity of the topic, we will first define the types of data involved and introduce some of the key technologies that can be leveraged.

First, a few simplified definitions:

  • Transaction Data is data that describes an event. It makes up the majority of data found in data stores within applications and data warehouses.
  • Metadata provides definitions and descriptive models of various data sources, data elements, and processes.
  • Master Data provides a best-version enterprise view (or golden record), linking many references to a business entity (e.g., customers, locations, products, suppliers).
  • Reference Data includes internally and externally defined classification schemes (codes and types) required by many systems.
  • Dimension Data includes master and alternate hierarchical structures used for financial systems, data stores, reporting, and analytics.
  • Data lineage is defined as a data life cycle that includes the data’s origins and where it moves over time. It describes what happens to data as it goes through diverse processes.

All business applications make heavy use of transaction data. Note that transaction data conforms to metadata and can include all the other types of data. For example, you will find elements from all of the data categories listed above within Oracle Enterprise Resource Planning (ERP).

Metadata describes how data is formatted and organized, how it is used, and can also include allowed values. It does not include the actual data values that relate to specific business events. All transaction data elements will typically correspond to fields in a database table, which are, in turn, well described by multiple metadata solutions. The Oracle database uses Data Definition Language (DDL) to define and create tables.
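DDL as metadata is easy to see in practice. The sketch below uses SQLite rather than Oracle purely so the example is self-contained; the table and columns are invented. The CREATE TABLE statement is metadata describing structure and allowed values, and the database catalog lets us read that metadata back without touching any transaction data.

```python
# DDL in action, illustrated with SQLite for a self-contained example:
# the CREATE TABLE statement is metadata that defines structure, and the
# database catalog exposes that metadata for inspection.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE invoice (            -- transaction data will live here
        invoice_id   INTEGER PRIMARY KEY,
        country_code TEXT NOT NULL,   -- reference data: a country code
        amount       REAL NOT NULL
    )
""")

# The catalog exposes the table's metadata: column names and types
columns = conn.execute("PRAGMA table_info(invoice)").fetchall()
for cid, name, col_type, notnull, default, pk in columns:
    print(name, col_type)
```

Broader metadata tools extend this same idea beyond single tables to whole data flows and business processes.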

Broader metadata tools, like Oracle Enterprise Metadata Management (OEMM), can also model processes that interact with the database, such as data transformation flows and business workflows. OEMM also allows the business and technical metadata definitions to be governed via workflows, and supports detailed data lineage analysis.

In our next update, we will explore enterprise data quality, business intelligence, and master, reference, and dimensional data technologies, and their roles in data governance solutions. Future updates will also include data authoring versus data consumption, big data, and the cloud, and their impacts on DG initiatives.

To learn more about utilizing Oracle’s Enterprise Metadata Management and Data Quality products, click here.


On September 1, 2016, Serene-AST, LLC hosted an informative webinar, highlighting our solution that enables Salesforce customers to enjoy enterprise-class data management by leveraging Oracle Customer Data Management.

The solution seamlessly integrates Salesforce and Oracle Customer Data Management, allowing the Salesforce CRM system to leverage a real-time Enterprise Data Management/Data Quality solution. Unifying all data in one system allows organizations to consolidate, validate, and continuously cleanse customer data in order to provide a comprehensive and accurate view to their sales and service teams.

If you missed our SFDC webinar, don’t worry! We’re hosting another free and informative webinar at a later date. This time, we’re discussing challenges in the High Tech and Semiconductor industries, and how the Serene solution, built on Oracle CX Cloud, can improve processes, revenue, and overall best practices for your organization. This solution helps organizations truly understand their sales pipeline and forecast via complete information gathered from CRM systems.

We look forward to your attendance, and sharing our knowledge of this incredible solution with you.

At the start of the year, Forbes put together a list of the most visible trends relating to CX. Now that we’re halfway through 2016, we can analyze these trends to see what was accurate, what was not, and what we can expect for the second half of 2016.

While all of the points made by Shep Hyken, the article’s author, are accurate, here at AST, we see mobile, omni-channel experiences as the top CX drivers.  Companies look to deliver customer service through mobile devices using geolocation, push notifications, SMS, social media, and mobile-friendly features.  Conversely, drivers such as YouTube for service have a reduced impact on CX.  Does your organization see a difference in the predicted trends?