In the past year, TEC has published a number of articles about data quality (Poor Data Quality Means A Waste of Money; The Hidden Role of Data Quality in E-Commerce Success; and Continuous Data Quality Management: The Cornerstone of Zero-Latency Business Analytics). This time, our focus turns to data quality within the customer relationship management (CRM) arena and how applications such as Interaction from Interface Software can help reduce the negative impact that poor data quality has on CRM objectives.
CRM is prone to more corrupt data than any other enterprise application. Traditional back-office systems require only a limited number of individuals to process data, whereas in CRM, almost everyone in an organization interacts with some part of the application. As a result, the probability of processing bad data increases. Ultimately, quality data is the foundation of a successful CRM implementation and of accurate customer intelligence. Past experience and research show that 50 to 70 percent of the effort in many CRM initiatives should be devoted to data quality. Consequently, poor data quality hampers a company's ability to realize the return on its investment in a truly integrated CRM. Data quality, therefore, should not be treated as a one-time exercise; it has to be integrated as a core element of managing the business.
Defective data leads to customer complaints and customer defection. It is therefore imperative to clearly define a standard for your data requirements and how those requirements support specific business objectives. One common source of poor data quality is the fast pace of business environments: information is constantly evolving, people move in and out of positions, and companies continually change their contact details. This results in a number of common issues, including
* Incorrect or inconsistent collections of customer details
* Duplicate records
* Inconsistent synchronization between multiple databases
* Multiple databases scattered throughout different departments or organizations, with data structured according to the particular rules of each database
Duplication is the most common type of data quality issue. Customer record duplication arises in a multitude of situations: it can occur through variations in spelling or through multiple addresses being assigned to the same customer profile. As a result, data becomes corrupted and is then misinterpreted. Having a customer in a database two or more times may create the impression that there are two different customers. Businesses can lose money if, for example, they run a mailing campaign against such a customer base: multiple letters sent to the same customer inflate the cost of mailing and fulfillment and erode the company's credibility in the eyes of its customers. A simple deduplication pass, sketched below, illustrates how such duplicates can be detected.
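The following Python sketch shows one way such duplicates can be caught. It is a minimal illustration rather than any vendor's actual method: the record fields ("name", "address") and the 0.9 similarity threshold are assumptions made for the example, and the matching relies on the standard library's SequenceMatcher.

```python
# A minimal deduplication sketch. The record fields and the similarity
# threshold are illustrative assumptions, not any CRM product's method.
from difflib import SequenceMatcher

def normalize(value: str) -> str:
    """Lowercase and collapse whitespace so spelling variants compare fairly."""
    return " ".join(value.lower().split())

def similar(a: str, b: str, threshold: float = 0.9) -> bool:
    """Treat two strings as the same customer if their similarity is high enough."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

def dedupe(records: list[dict]) -> list[dict]:
    """Keep the first occurrence of each customer; drop near-duplicate names."""
    unique: list[dict] = []
    for record in records:
        if not any(similar(record["name"], kept["name"]) for kept in unique):
            unique.append(record)
    return unique

customers = [
    {"name": "Acme Corp.", "address": "1 Main St"},
    {"name": "ACME Corp", "address": "1 Main Street"},   # spelling variant
    {"name": "Bolt Industries", "address": "9 Elm Ave"},
]
print(dedupe(customers))  # the two Acme variants collapse into one record
```

A production deduplication tool would typically compare several fields together (name, address, phone) and route borderline matches to a human reviewer rather than merging them automatically.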
The use of poor-quality databases can also lead to the misinterpretation of business operations by generating misleading customer analyses. Customer intelligence and customer behavior analyses are also used for fraud prevention and customer retention, so unreliable data can lead to costly results there as well. As increasing evidence shows, business intelligence is key to CRM success; as demonstrated above, mining poor-quality databases has exactly the opposite effect.
A stand-alone system, on its own, is insufficient to tackle the underlying causes of poor data quality. Data quality should be considered a business issue, and as such, businesses must create and institute enterprise-wide guidelines for it. A combination of people, process, and technology tools is required to establish a data quality program.
That said, an application's contribution to data cleansing remains a pillar of the overall success of a data quality strategy. Systems may implement data cleansing through centralized data warehouses, the integration of data cleansing software, and the use of third-party data with the enterprise application. Some of the available techniques are listed below; a sketch of a validation rule follows the list.
* Data-validating process rules
* Centralized database
* Data cleansing technology
* Data scrubbing
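As an illustration of the first technique, the following sketch shows a data-validating process rule that flags a customer record at entry time, before bad data ever reaches the database. The field names and the regular expressions are assumptions made for the example; a real system would tailor the rules to its own data standard.

```python
# A hedged sketch of a data-validating process rule: flag a customer
# record at entry time instead of scrubbing it later. Field names and
# the regular expressions are illustrative assumptions.
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
POSTAL_RE = re.compile(r"^\d{5}(-\d{4})?$")  # assumes US ZIP codes

def validate_customer(record: dict) -> list[str]:
    """Return a list of rule violations; an empty list means the record passes."""
    problems = []
    if not record.get("name", "").strip():
        problems.append("name is required")
    if not EMAIL_RE.match(record.get("email", "")):
        problems.append("email is malformed")
    if not POSTAL_RE.match(record.get("postal_code", "")):
        problems.append("postal code is malformed")
    return problems

print(validate_customer({"name": "Acme Corp", "email": "sales@acme", "postal_code": "12345"}))
# ['email is malformed'] -- the record is flagged before it corrupts the database
```

Because the rule returns every violation rather than stopping at the first, data-entry staff can correct the whole record in one pass.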
Data scrubbing is the process of fixing or eliminating individual pieces of data that are incorrect, incomplete, or duplicated before the data is passed to a data warehouse or another application. The aim of data scrubbing is twofold: eliminate errors and redundancy, and bring uniformity to different data sets that may have been created under different or incompatible business rules. Data scrubbing can thus be considered a combination of technology and process. The business-to-business (B2B) list provider Dun & Bradstreet (D&B) has developed a good example of a data scrubbing process: it introduced a uniform coding methodology that associates each individual business with a D-U-N-S number. This number is unique and remains unchanged throughout the business's life span. D&B created this approach to reach and maintain uniformity within its own database, and it offers its customers the same service on a regular basis.
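The sketch below illustrates the general pattern that D&B's approach exemplifies: scrub obvious defects from a record, then key it to a stable identifier so that later merges do not depend on spelling. Real D-U-N-S numbers are assigned by D&B; the lookup table here is a made-up stand-in used only to show the idea.

```python
# A sketch of the scrub-then-stable-key pattern described above. The
# lookup table is a hypothetical stand-in; real stable identifiers
# (such as D-U-N-S numbers) come from an external authority.
def scrub(record: dict) -> dict:
    """Fix obvious defects: trim whitespace, drop empty fields, title-case the name."""
    cleaned = {}
    for key, value in record.items():
        if isinstance(value, str):
            value = value.strip()
        if value:  # discard empty or None fields rather than store junk
            cleaned[key] = value
    if "name" in cleaned:
        cleaned["name"] = cleaned["name"].title()
    return cleaned

# Hypothetical mapping of canonical business names to stable identifiers.
STABLE_IDS = {"Acme Corp": "123456789"}

def key_record(record: dict) -> dict:
    """Attach a stable identifier so later merges do not rely on spelling."""
    record["stable_id"] = STABLE_IDS.get(record.get("name"))
    return record

raw = {"name": "  acme corp ", "phone": ""}
print(key_record(scrub(raw)))
# {'name': 'Acme Corp', 'stable_id': '123456789'}
```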
Another approach to achieving high-quality data is the use of program tools developed for managing data quality. These tools fulfill the objectives of auditing, cleaning, and monitoring data. Companies may opt to develop such tools in-house or to acquire them from a vendor specializing in data cleansing. Many data warehouse and business intelligence vendors, such as SAS Institute, Informatica, Experian, and Group 1 Software, provide data cleansing options that sit on top of their databases. FirstLogic and Ascential Software, which also offer this capability, are considered by the market to be among the data-quality-oriented vendors.