Our client had numerous database tables shared across multiple applications. Data synchronization between the source and target database platforms was implemented during early adoption to meet specific business objectives. This limited our ability to perform the initial data loads required when new applications were onboarded, because any impacted tables needed to remain online.

The typical solution would be to import files into staging tables and use common table expressions (CTEs) to move the data into the destination. However, this approach does not scale, given the size of certain tables and the complexity of identifying batches for parallel execution. A thousand tables would require a thousand unique implementations, each with its own combination of keys to delineate segments of data that can be migrated in parallel. Implementing a bespoke solution for each migration would take an infeasible amount of time. Additionally, in testing, migrating a single table of 600MM records with CTEs ran at roughly 5-10MM records per hour, which would take several days to complete. The segments were not small enough, and the number of available parallel processes was not large enough, to make this a workable solution.
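To make the per-table effort concrete, here is a minimal sketch of the staging-plus-CTE pattern described above, using an in-memory SQLite database. The table names, column names, and batch size are invented for illustration; in practice each table would need its own key combination to delineate segments, which is exactly the per-table work that does not scale.

```python
import sqlite3

# Hypothetical example: copy rows from a staging table to a destination
# in key-delineated batches. Each batch could, in principle, be handed
# to a separate parallel worker.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE staging (id INTEGER PRIMARY KEY, payload TEXT)")
con.executemany(
    "INSERT INTO staging (id, payload) VALUES (?, ?)",
    [(i, f"row-{i}") for i in range(1, 1001)],
)
con.execute("CREATE TABLE destination (id INTEGER PRIMARY KEY, payload TEXT)")

BATCH = 250  # segment size; chosen arbitrarily for this sketch

lo = 1
(hi,) = con.execute("SELECT MAX(id) FROM staging").fetchone()
while lo <= hi:
    # A CTE selects one key range; the INSERT moves just that segment.
    con.execute(
        """
        WITH segment AS (
            SELECT id, payload FROM staging WHERE id BETWEEN ? AND ?
        )
        INSERT INTO destination SELECT id, payload FROM segment
        """,
        (lo, lo + BATCH - 1),
    )
    lo += BATCH

count = con.execute("SELECT COUNT(*) FROM destination").fetchone()[0]
print(count)  # prints 1000: every staged row was copied
```

The hard part the text describes is not the loop itself but choosing the segmenting keys: a simple integer primary key makes ranges trivial, while composite or non-sequential keys force a custom batching strategy for each table.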