I recently stumbled upon some freak cases while developing a simple heuristic to detect transactions with unusual prices. From historical transaction data, I calculated plausibility intervals for about 75,000 articles. The intervals were wide enough to allow for some volatility caused by seasonal changes in price, but narrow enough to detect errors caused by people intentionally or unintentionally manipulating the price. The data came from hundreds of different warehouses, each with different suppliers and different contracts for similar products.

When someone entered a price that was significantly too high or too low, the system would require a clerk to check the transaction and either confirm the price or stop the transaction. One of the main concerns was the number of transactions each day that a clerk would have to inspect manually. If this number was too high, the clerks would not want to work with the system. My employer was willing to overlook a few minor errors as long as the most important, i.e. most expensive, errors were caught.

I tested my solution on three months' worth of data, looking at the daily number of outliers detected by the system. Everything looked good. The number of cases to be handled each day was small enough not to overwhelm the clerks… except for a two-week period in which the number of outliers suddenly exploded. I had prepared a visualization that was supposed to show how convenient my solution would be for the clerks, but the values for these days stuck out like a sore thumb.

I looked into the data and discovered that most of the outliers from these two weeks belonged to a specific warehouse. Later, I learned that this warehouse was new and that a software error had caused the transmission of incorrect prices. The error had remained undiscovered for days, with thousands of transactions with bad prices going unnoticed. The people on the business side were still busy cleaning up the mess.
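To make the idea concrete, here is a minimal sketch of a plausibility-interval check of this kind. The percentile cutoffs and the relative margin are illustrative assumptions of mine, not the actual parameters used in the system described above:

```python
from statistics import quantiles

def plausibility_interval(prices, margin=0.25):
    """Derive a plausible price range from one article's historical prices.

    Takes the 5th/95th percentiles of the history and widens them by a
    relative margin, so seasonal volatility stays inside the interval
    while grossly wrong prices fall outside. Both the percentiles and
    the margin are hypothetical tuning choices for illustration.
    """
    qs = quantiles(prices, n=20)          # cut points at 5%, 10%, ..., 95%
    low, high = qs[0], qs[-1]             # 5th and 95th percentile
    return low * (1 - margin), high * (1 + margin)

def is_outlier(price, interval):
    """Flag a transaction price that falls outside the plausible range."""
    low, high = interval
    return not (low <= price <= high)
```

In practice, one such interval would be precomputed per article (and possibly per warehouse, given the different supplier contracts), and every incoming transaction would be checked against it, with flagged cases routed to a clerk.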