
Business Intelligence Maturity Model

August 26, 2016 (updated May 7, 2021) | Business Intelligence, Press, Resources

Business Intelligence projects have become commonplace amongst industry leaders over the past several years. What never ceases to amaze me is how different the implementations are from one environment to another and how often a setup is extremely strong in one area, while lacking in another. I stumbled across a really great document about IT governance that discussed process maturity and thought that some similar concepts could be applied in the BI domain.

The tables below are a collection of central ideas from our BI team’s experiences, meant to be a guide in determining the level of maturity or completeness of a BI implementation. I’ll be the first to admit that they are not all-inclusive, as project requirements can cover virtually endless territory. But they certainly contain many of the core components necessary for a highly usable, secure, available, high-quality, and visible set of analytics.

What categories, components, or ideas would you add to the list?

High Level BI Maturity Assessment

  1. Lack of Awareness – May have heard terms related to BI, but there is little or no understanding of what is involved and no awareness of the potential impact of data analytics.
  2. Aware – Have become aware of BI topics and some general benefits and features. May have minimal exposure to related technologies. Potential benefits are still abstract, but there is recognition that BI analytics would have a positive impact on the business. There may be some manually created spreadsheets or documents that are not easily shared or centrally stored.
  3. Entry Level – The first steps have been taken. The questions and problems that can be answered by data analysis have been defined. There is likely some one-off development using entry-level self-service products or spreadsheets. Analytical collaboration and re-use between end users is difficult. Producing new analytics is time consuming, uses inconsistent rules, and suffers from data quality issues.
  4. Centralized Data Rules – Source data for analytics resides in a single point and is reused for all data visualizations. End-user-facing visualizations are still mainly contained within spreadsheets, but some other relatively static reports may now have been created. Consistency of data rules has been improved by using a central source, but there are still many data quality issues, and collaboration at the data presentation level is still difficult. Data latency is relatively high, with data being refreshed on periods of 24 hours or greater. Most analytics still look backwards at “what happened”.
  5. Clean and Shared – Data quality issues have been addressed and there are processes in place to continually improve. A platform has been chosen to deploy collaborative analytics that can be shared amongst users. Data latency has improved to hours or minutes. With data quality improved, analytics now begin to reveal relationships between characteristics, showing not only what happened, but why it happened.
  6. Predictable – Data quality is excellent. Secondary and tertiary data components are now included, allowing immense analytical power and flexibility. End users are able to create, explore, and share analytics in only a few clicks. Security models follow best practices. Data latency is close to real time for all end users. Infrastructure personnel have insight and visibility into BI processes to monitor for any issues. Analytics can now begin to show trends and help predict what will happen next.
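
To make the levels a little more concrete, here is a minimal Python sketch of one way a team might run a quick self-assessment against them. The attribute categories come from the next section, but the scores and the “weakest category caps the overall level” convention are my own illustrative assumptions, not part of a formal scoring method.

```python
# Minimal self-assessment sketch. The category names mirror the attribute
# groups later in this post; the scores and the "weakest category wins"
# convention are illustrative assumptions, not a formal methodology.

LEVELS = {
    1: "Lack of Awareness",
    2: "Aware",
    3: "Entry Level",
    4: "Centralized Data Rules",
    5: "Clean and Shared",
    6: "Predictable",
}

# Hypothetical scores for one environment, keyed by attribute category.
scores = {
    "Self-service / Usability": 4,
    "Governance / Control / Security": 3,
    "Repeatability / Latency": 4,
    "Data Quality": 2,
    "Process Visibility": 3,
}

# Treat the weakest category as the cap on overall maturity, since a setup
# that is strong in one area can still be lacking in another.
overall = min(scores.values())
print(f"Overall maturity: level {overall} ({LEVELS[overall]})")
for category, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"  {category}: level {score} ({LEVELS[score]})")
```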

Attributes of a Successful BI Implementation

Each stage below roughly corresponds to one of the maturity levels above (levels 2 through 6, Aware through Predictable) and is described across five attribute categories: Self-service / Usability, Governance / Control / Security, Repeatability / Latency, Data Quality, and Process Visibility.

Level 2 – Aware
  • Self-service / Usability: Static data is generated; not easily modified by users; changes to the underlying process are needed for any sorting, filtering, or grouping change; very few users; still defining analytical questions.
  • Governance / Control / Security: Generic access granted to users; no central processing documentation or knowledge shared.
  • Repeatability / Latency: Many manual steps; not a scheduled process; low rate of success.
  • Data Quality: Resembles source data; has not been cleansed or translated to correct data types.
  • Process Visibility: Data lag is determined by manual verification.

Level 3 – Entry Level
  • Self-service / Usability: Data is becoming more dynamic; end users can apply simple changes to sorting and filtering; analytical value is growing and impacting a larger number of users; goals for what should be analyzed have been defined, but not all are supported or created.
  • Governance / Control / Security: Basic security principles have been applied to users; some knowledge of update processes has been shared across users.
  • Repeatability / Latency: Fewer steps, but still manual; data is refreshed regularly by users.
  • Data Quality: Some basic rules have been applied, but are not consistent; cleansing rules are applied on a one-off basis; data exists in multiple sources.
  • Process Visibility: Still requires manual checks; may be a field or log file containing an update date (a small freshness-check sketch follows this level).
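
At the Entry Level stage, checking data freshness usually still means someone manually looking at an update date. As a rough sketch of automating that check, assume the refresh job writes a hypothetical last_updated.txt marker file containing an ISO-format timestamp:

```python
# Rough freshness-check sketch. The marker file name, its ISO timestamp
# format, and the 24-hour threshold are illustrative assumptions.
from datetime import datetime, timezone
from pathlib import Path

LAST_UPDATED_FILE = Path("last_updated.txt")  # hypothetical marker written by the refresh job
MAX_LAG_HOURS = 24                            # illustrative freshness threshold

def data_lag_hours() -> float:
    """Return how many hours old the data is, based on the marker file."""
    last_updated = datetime.fromisoformat(LAST_UPDATED_FILE.read_text().strip())
    if last_updated.tzinfo is None:
        # Assume UTC if the job wrote a naive timestamp.
        last_updated = last_updated.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - last_updated).total_seconds() / 3600

if __name__ == "__main__":
    lag = data_lag_hours()
    status = "OK" if lag <= MAX_LAG_HOURS else "STALE"
    print(f"Data lag: {lag:.1f} hours ({status})")
```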

Level 4 – Centralized Data Rules
  • Self-service / Usability: Users can now analyze what has happened in the past for the defined goals; visualizations are limited to spreadsheets and basic dynamic reports, but are consistent across users.
  • Governance / Control / Security: Security has been implemented at a group level; groups are very generic (Admin vs. Reader).
  • Repeatability / Latency: An automated ETL process is in place; data lag is 24 hours or greater, but refreshes are executed regularly; manual intervention is no longer required for data updates; processes succeed more often than they fail.
  • Data Quality: Data has now been centralized to a single point; some data rules have been centralized, but others are still one-off; adding new cleansing rules is now simpler.
  • Process Visibility: Process execution now logs success or failure (a minimal logging sketch follows this level); visibility into details is very limited.
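
The jump to Centralized Data Rules hinges on an automated ETL process whose execution logs success or failure. Here is a minimal sketch of that logging pattern, using a hypothetical SQLite run-log table and placeholder step functions; a real environment would plug in its own scheduler, warehouse, and metadata tables.

```python
# Minimal ETL run-logging sketch. The etl_run_log table, the SQLite file,
# and the step names are illustrative assumptions.
import sqlite3
import traceback
from datetime import datetime, timezone

def log_run(conn: sqlite3.Connection, step: str, status: str, message: str = "") -> None:
    """Record one step's outcome in the run log."""
    conn.execute(
        "INSERT INTO etl_run_log (run_at, step, status, message) VALUES (?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), step, status, message),
    )
    conn.commit()

def run_step(conn: sqlite3.Connection, step: str, func) -> None:
    """Execute one ETL step and log success or failure."""
    try:
        func()
        log_run(conn, step, "SUCCESS")
    except Exception:
        log_run(conn, step, "FAILURE", traceback.format_exc())
        raise

if __name__ == "__main__":
    conn = sqlite3.connect("etl_log.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS etl_run_log "
        "(run_at TEXT, step TEXT, status TEXT, message TEXT)"
    )
    # Placeholder steps; real extract/transform/load logic would go here.
    run_step(conn, "extract_sales", lambda: None)
    run_step(conn, "load_warehouse", lambda: None)
    conn.close()
```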

Level 5 – Clean and Shared
  • Self-service / Usability: Analytical power has increased to show not only what has happened in the past, but also why; visualizations are more robust, with the ability to choose charts and indicators as well as sort, filter, and group with ease; analytics are easily accessed and shared by users; some data can be viewed from offsite locations.
  • Governance / Control / Security: Security groups have been well defined for individual roles; access to reports and spreadsheets has been limited by user groups.
  • Repeatability / Latency: Data lag has been reduced to hours, with the capability of updates throughout the day; processes have been decoupled to minimize impact on production databases; processes succeed more than 80% of the time.
  • Data Quality: Most data quality issues have been addressed by centralized cleansing rules; processes are in place to capture quality issues and drive continued improvement (a small audit sketch follows this level); 80% of data is centralized and cleansed.
  • Process Visibility: Improved process logging at the detail level; processes can be analyzed for efficiency on a source-by-source level; the amount of data being moved can be visualized.
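
“Processes are in place to capture quality issues” can be as simple as running the centralized validation rules on every load and tallying the failures so they feed a continuous-improvement backlog. A small sketch, with hypothetical rules and sample rows standing in for real source data:

```python
# Small data-quality audit sketch. The rules, field names, and sample rows
# are illustrative assumptions, not the original implementation.
from collections import Counter

# Centralized validation rules: rule name -> predicate each row must satisfy.
RULES = {
    "customer_id is present": lambda row: bool(row.get("customer_id")),
    "amount is non-negative": lambda row: isinstance(row.get("amount"), (int, float)) and row["amount"] >= 0,
    "region is a known code": lambda row: row.get("region") in {"NA", "EMEA", "APAC"},
}

def audit(rows):
    """Return (rows checked, Counter of rule failures)."""
    failures = Counter()
    for row in rows:
        for name, predicate in RULES.items():
            if not predicate(row):
                failures[name] += 1
    return len(rows), failures

if __name__ == "__main__":
    sample = [
        {"customer_id": "C001", "amount": 125.0, "region": "NA"},
        {"customer_id": "", "amount": -10.0, "region": "XX"},
    ]
    checked, failures = audit(sample)
    print(f"Rows checked: {checked}")
    for rule, count in failures.most_common():
        print(f"  FAILED {rule}: {count}")
```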

Level 6 – Predictable
  • Self-service / Usability: Added data components now allow for analyzing trends and determining what will happen next; analytics have a multitude of self-service options for end users with great flexibility; users can not only view, but re-organize and update analytics easily from offsite locations using mobile devices.
  • Governance / Control / Security: Security groups are mature and consistent; access restrictions have been added at the data layer so that users of the same analytics see only applicable data (a row-filtering sketch follows this level).
  • Repeatability / Latency: Data lag has been reduced to almost nothing for all users; data can be viewed from offsite locations with ease; processing success is 99% or greater; maintenance windows have been defined to limit downtime.
  • Data Quality: 90+% of data is centralized and accurately cleansed; secondary and tertiary data components and sources have been centralized.
  • Process Visibility: The infrastructure team has insight into all processes and can quickly identify and correct issues; automated maintenance processes are in place to ensure processing optimization.
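
Restricting access at the data layer means two people can open the same analytic and see only their own slice of the data. Most platforms enforce this with row-level security in the database or semantic layer; the sketch below only illustrates the idea, with a hypothetical user-to-region mapping and in-memory rows.

```python
# Row-filtering sketch of data-layer security. The user entitlements and
# sales rows are illustrative assumptions; real deployments would push this
# down into the database or BI platform as row-level security.
USER_REGIONS = {
    "alice": {"NA"},
    "bob": {"EMEA", "APAC"},
}

SALES = [
    {"region": "NA", "amount": 1200},
    {"region": "EMEA", "amount": 800},
    {"region": "APAC", "amount": 950},
]

def rows_for(user: str, rows):
    """Return only the rows the given user is entitled to see."""
    allowed = USER_REGIONS.get(user, set())
    return [row for row in rows if row["region"] in allowed]

if __name__ == "__main__":
    for user in USER_REGIONS:
        visible = rows_for(user, SALES)
        total = sum(row["amount"] for row in visible)
        print(f"{user}: {len(visible)} rows visible, total amount {total}")
```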
