Sunday, January 30, 2011

Retailer 1: The data from GDSN data pools is too bad!

When you talk to retailers about GDSN, they always complain about the quality of the data their suppliers are providing. And I always wondered: how come?

A couple of months ago I was invited by a large retailer to do a data quality analysis. Their reason was that they knew they had a problem with their product master data, but they did not know what their problems really were. Everybody at that retailer complained about the quality of their own product master data, but nobody had any metrics on what exactly was bad.

So what did we do?

First of all we went through all the departments and talked to the operational people to understand what was wrong with their product master data. From that we got an extensive list of problems in their product data. The three most interesting ones were:

1. The warehouse people complained that measurements were often missing or wrong, so they frequently had to redo the intake of physical goods because the automatically calculated shelf space did not fit.

2. The people in charge of route planning also complained about wrong and missing measurements, which led to bad route plans and "overfilled" trucks.

3. The C-level people complained about incorrect supplier scorecards, which led to very difficult discussions and negotiations with their suppliers, especially about order fulfillment.

From there we focused on investigating how good or bad the measurements in the retailer's systems really were, and on whether bad master data could somehow lead to wrong scorecards.

Our results were a little bit surprising:

1. 70% of their products had no measurements at all in their central master data management system, or only default values (length = 1 mm, width = 1 mm, height = 1 mm, weight = 1 kg). The first sketch after this list shows the kind of check behind this number.

2. We randomly picked some products and compared their measurements across the different warehouse systems (since they have multiple warehouses, they run multiple instances of their warehouse system and therefore hold each item multiple times). In most cases the measurements differed significantly. Because the measurements were missing in the central system, they had established a measurement process in the warehouses, but obviously the people there were not able to measure the same product in the same way. The funniest finding concerned one of the private-label products, whose measurements differed by more than 100% (see the spread calculation in the first sketch below).

3. When we tried to figure out how master data issues could impact the scorecards, we found that their system did not support a flag indicating whether a unit is orderable or not. Using a few sample scorecards we could prove that this was the reason for wrong orders and bad order fulfillment: the retailer ordered a unit that was not orderable. Because the retailer was a very good customer, the supplier executed the order anyway (probably after internal manual correction) and sent back the despatch advice with the correct (orderable) GTIN and a corresponding quantity that differed from the order. And bang: the scorecard showed a delivery error. We also had cases where the supplier's manual correction was simply wrong and the wrong quantity was delivered. But all of these errors were caused by the missing "orderable" flag and required manual intervention on the supplier side; the second sketch below illustrates the mechanism.

4. We compared the retailer's product data to the data from a data pool and found that the data pool's data looked completely different.
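
To make findings 1 and 2 a bit more concrete, here is a minimal sketch (in Python) of the kind of checks behind those numbers. The Item structure, the field names and the 1 mm / 1 kg placeholder values are assumptions for illustration only, not the retailer's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical item record; field names are illustrative, not the retailer's schema.
@dataclass
class Item:
    gtin: str
    length_mm: Optional[float] = None
    width_mm: Optional[float] = None
    height_mm: Optional[float] = None
    weight_kg: Optional[float] = None

# The "default values" found in the central system (1 mm / 1 kg placeholders).
DEFAULTS = {"length_mm": 1.0, "width_mm": 1.0, "height_mm": 1.0, "weight_kg": 1.0}

def has_real_measurements(item: Item) -> bool:
    """True only if every dimension is present and not just the placeholder default."""
    return all(
        getattr(item, field) is not None and getattr(item, field) != default
        for field, default in DEFAULTS.items()
    )

def measurement_coverage(items: list[Item]) -> float:
    """Share of items with real measurements (finding 1: only about 30% here)."""
    return sum(has_real_measurements(i) for i in items) / len(items) if items else 0.0

def relative_spread(values: list[float]) -> float:
    """Spread between the smallest and largest value reported by the warehouses
    for the same item (finding 2); a result > 1.0 means a difference of more than 100%."""
    lo, hi = min(values), max(values)
    return (hi - lo) / lo
```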
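
For finding 3, here is an equally rough sketch of why the missing "orderable" flag hurts the scorecard: if the item master cannot tell that an ordered GTIN is not orderable, the supplier's (correct) substitution on the despatch advice inevitably looks like a delivery error. Again, the record structures and the scoring rule are simplified assumptions, not the retailer's actual logic:

```python
from dataclasses import dataclass

@dataclass
class OrderLine:
    gtin: str       # unit the retailer ordered
    quantity: int

@dataclass
class DespatchLine:
    gtin: str       # unit the supplier actually shipped (per the despatch advice)
    quantity: int

def scorecard_line_ok(ordered: OrderLine, despatched: DespatchLine) -> bool:
    """Naive fulfillment rule: any GTIN or quantity mismatch counts as a delivery
    error. Without an 'orderable' flag in the item master, a non-orderable GTIN
    can be ordered; the supplier then substitutes the correct orderable GTIN with
    a different quantity, and this check reports a (false) delivery error."""
    return ordered.gtin == despatched.gtin and ordered.quantity == despatched.quantity

# Illustrative values: 10 non-orderable consumer units ordered, the supplier ships
# the orderable case GTIN instead -> the scorecard counts a delivery error.
order = OrderLine(gtin="consumer-unit-gtin", quantity=10)
despatch = DespatchLine(gtin="case-gtin", quantity=2)
assert scorecard_line_ok(order, despatch) is False
```

A scorecard built on this kind of rule keeps blaming the supplier even when the supplier is silently correcting the retailer's own master data problem.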

There were a lot more outcomes and results from the data quality analysis, but listing them all would really exceed this posting. I think you now have an impression of how bad the data was at that retailer and what problems it caused.

Now, since this retailer has been doing data synchronisation for a couple of years already, I was really keen to figure out why the master data could be that bad and whether this was really a supplier issue.

To make a long story short: from an IT perspective, the retailer had implemented everything quite well. There was an approval workflow in which the people in the buying department could look at incoming changes and new items and then decide whether to take them over or to change something.

So we decided to talk to the people who were managing this approval process.

Surprise, surprise!

We learned that they preferred NOT to work with that approval process. Mainly because so many changes came in every day, but also because they were very unsure what would happen in their backend systems if they approved a change. Would their backend processes still work? Or would it create some kind of failure that they would be held responsible for? So they preferred not to change anything, because that way, they thought, they could not cause any process failure.

When we then looked into the queue of items waiting to be approved, we found item changes going back to 2005!

The final surprise came when we presented our findings to middle management. One comment after the whole presentation I will never forget: "The product data from suppliers has to be 100% correct; as long as nobody guarantees that, we cannot work with it."

Thankfully, the C-level people got the message and launched an MDM program to start improving their master data management.
