By Gary Allemann, MD at Master Data Management
Decision support, or business intelligence (BI), has been a major driver for CIOs for many years. The current focus on “big data” and social media analytics is a logical extension of inward-facing analysis capability, allowing decisions to be made using the wealth of information available on the web.
Most BI sales, and projects, are delivered on the premise that any decision is better than no decision. Decision makers need information in order to make decisions, and it is crucial to look not only for BI tools that assist with contextualising information, but also for the advice and services needed to get the most out of those tools, including big data. Big data’s three dimensions – volume, velocity and variety – represent an extreme view of the challenges faced in any data analytics project. The question is: how do we consolidate multiple data sources, with varying levels of quality, and deliver analysis that is timely and relevant?
Analytics projects are tricky, to say the least. The CEO needs his report within the hour, but poor data quality means that the numbers cannot be generated. So IT works around the problem – perhaps by ignoring records that do not hold information in the right format, or by substituting valid values for invalid ones.
More often than not, the data is ‘fudged’: the inputs are manipulated so that the outputs we need can be generated while they are still relevant. This is rarely done with ill intent – it is a necessary evil, a by-product of working in an imperfect world. Most of the time it will not have a significant impact on decision making, except when it does.
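The kind of workaround described above can be sketched in a few lines. This is a minimal illustration using hypothetical sales records – the field names, values and the `workaround_total` helper are invented for this example, not taken from any real system:

```python
# Hypothetical raw records, some with invalid or missing revenue values.
raw_records = [
    {"region": "North", "revenue": "120000"},
    {"region": "South", "revenue": "n/a"},   # invalid format
    {"region": "East",  "revenue": None},    # missing
    {"region": "West",  "revenue": "95000"},
]

def workaround_total(records, default=0.0):
    """Sum revenue, silently replacing unparseable values with a default."""
    total = 0.0
    substituted = 0
    for rec in records:
        try:
            total += float(rec["revenue"])
        except (TypeError, ValueError):
            total += default      # the 'fudge': substitute a valid value
            substituted += 1
    return total, substituted

total, substituted = workaround_total(raw_records)
# total is 215000.0, with 2 of 4 records silently fudged -
# information the report's reader never sees.
```

The point is not that the substitution is wrong in itself, but that the count of substituted records vanishes before the number reaches the decision maker.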
Decisions need to be based on an evaluation of the ‘best information available’. It is when poor information is presented as good information that the waters get muddy. Are business decisions being made based on truth, or based on truthiness?
According to Stephen Colbert, “Truthiness is what you want the facts to be, as opposed to what the facts are. What feels like the right answer, as opposed to what reality will support.”
If information has been significantly manipulated to produce a report, surely the decision maker deserves to know. What is the level of confidence in the underlying data? Was key information missing, or did it vary widely and inconsistently from the previous quarter’s numbers? Could this have been an error in translation or amalgamation?
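One crude sanity check implied by these questions – flagging a metric that varies widely from the previous quarter – could look like this. The 30% threshold and the function name are illustrative assumptions, not anything prescribed by the article:

```python
def flag_suspect_change(current, previous, threshold=0.30):
    """Flag a metric whose quarter-over-quarter change exceeds a threshold -
    a crude signal that translation or amalgamation errors may be present."""
    if previous == 0:
        return current != 0
    return abs(current - previous) / abs(previous) > threshold

flag_suspect_change(150000, 100000)  # a 50% jump is flagged for review
flag_suspect_change(102000, 100000)  # a 2% change is not
```

A flag like this does not prove the number is wrong; it simply tells the decision maker which figures deserve a second look.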
Legislation and regulations such as King III, the Basel Accords and the Solvency Assessment and Management (SAM) regime all recognise that key indicators, such as financial forecasts or risk models, can only be accurate within the margin of error supported by the underlying data. They seek to protect the public against risk by penalising companies and individuals that cannot provide a measure of the quality of the data supporting key public metrics.
Common sense suggests that we should give our business leaders similar protection by providing a confidence indicator on every report, linked to the quality of the supporting data. If they then choose to make a decision based on gut feel, surely that is better than making a decision based on truthiness?
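Such a confidence indicator could be as simple as a label derived from how much of the input data had to be manipulated. A minimal sketch, assuming the substitution count is already tracked (the thresholds and labels are illustrative assumptions):

```python
def confidence_label(substituted, total):
    """Derive a report-level confidence indicator from the fraction of
    input records that were substituted or otherwise manipulated."""
    if total == 0:
        return "no data"
    fraction = substituted / total
    if fraction == 0:
        return "high confidence"
    if fraction < 0.05:
        return "medium confidence"
    return "low confidence"

confidence_label(2, 4)    # half the records were fudged: "low confidence"
confidence_label(0, 100)  # untouched data: "high confidence"
```

Even a coarse label like this moves the manipulation out of the shadows and into the decision maker’s view.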