A good way to avoid problems is to simply put a uniqueness constraint on a table column, e.g. a UNIQUE PRIMARY INDEX
in Teradata. This way, trying to insert non-unique values will cause an error and the program or workflow will fail. At first, this may make your workflow fail more often, but it keeps you from ignoring the problem and forces you to fix it. That is preferable to simply hoping the problem never occurs, which is just wishful thinking. However, make sure to check what this kind of constraint costs you in performance. A uniqueness constraint typically means that every INSERT includes a uniqueness check, which slows down that step, especially as the table grows. If your data product frequently writes data into the table, the uniqueness constraint may be too expensive.
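As a minimal sketch, a Teradata table with a UNIQUE PRIMARY INDEX might be declared as follows; the table and column names here are hypothetical and only serve to illustrate where the constraint goes.

```sql
-- Hypothetical example: enforce uniqueness of customer_id at the table level.
-- Teradata checks the UNIQUE PRIMARY INDEX on every INSERT and rejects duplicates.
CREATE TABLE sales.customer (
    customer_id   INTEGER NOT NULL,
    customer_name VARCHAR(100),
    created_at    TIMESTAMP
)
UNIQUE PRIMARY INDEX (customer_id);

-- Inserting a second row with the same customer_id fails with a duplicate-key
-- error instead of silently creating a duplicate row.
INSERT INTO sales.customer (customer_id, customer_name, created_at)
VALUES (1001, 'Example Corp', CURRENT_TIMESTAMP);
```

The second INSERT with an existing customer_id is rejected by the database itself, so the duplicate is caught at write time rather than discovered later in downstream reports; the price is exactly the per-INSERT uniqueness check described above.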