Thanks for this suggestion and for the many comments and upvotes. I realize that this is a pain point.
I have a few shorter-term workarounds to summarize as well as some information on the longer roadmap in this update. I know these shorter-term workarounds aren’t perfect (I summarize the pros and cons), but I’m posting them as they may help a few folks.
Workaround 1) When changes to static data need to be made, use a “relink the table” pattern
One can “cleanly rescript” a static data table in SQL Source Control by:
- Unlinking the static data table
- Committing
- Relinking the static data table
- Committing
Pro: This works with the GUI and requires no special knowledge or comfort with TSQL. This may help folks with just a few static data tables.
Con: This requires extra steps and results in extra commits in the history, which I realize can be annoying, particularly if the team doesn’t squash commits when merging or use a similar strategy.
Workaround 2) Use a post-deployment script to manage static data tables instead of linking them
Another option is to not use the “link static data tables” feature at all, and instead manage the static data table yourself with a custom script.
I’ve added a code sample called “post-deployment script to manage static data for a table with an identity column” into our reference on post-deployment scripts: https://documentation.red-gate.com/soc7/common-tasks/working-with-pre-post-deployment-scripts/static-data
Pro: This gives you complete control over the TSQL.
Con: Your post-deployment script might become quite complex and create its own issue in merge conflicts, depending on how many static data tables you have and whether you are using the post-deployment script for other things.
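To illustrate the pattern the linked code sample describes, here is a minimal sketch of a post-deployment script for a static data table with an identity column. The table and column names (dbo.Status, StatusId, StatusName) are placeholders — adapt them to your schema, and refer to the linked documentation for the full sample:

```sql
-- Sketch only: dbo.Status / StatusId / StatusName are placeholder names.
-- Allow explicit values to be inserted into the identity column.
SET IDENTITY_INSERT dbo.Status ON;

-- Make the table match the desired static data exactly:
-- update changed rows, insert missing rows, delete extras.
MERGE dbo.Status AS target
USING (VALUES
    (1, N'Active'),
    (2, N'Inactive'),
    (3, N'Archived')
) AS source (StatusId, StatusName)
    ON target.StatusId = source.StatusId
WHEN MATCHED AND target.StatusName <> source.StatusName THEN
    UPDATE SET StatusName = source.StatusName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (StatusId, StatusName) VALUES (source.StatusId, source.StatusName)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;

SET IDENTITY_INSERT dbo.Status OFF;
```

Because the MERGE is idempotent, the script can run safely on every deployment.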
An alternate approach using SQL Change Automation
If customers have licenses for SQL Change Automation, the SCA Migrations approach does not have this issue and has additional features for flexibility with static data:
- Since migrations only script out the data that has changed, the issue of rescripting the entire table doesn’t occur in SCA – it only ever generates commands for what has been modified since the last migration
- Supports column-filtered static data tables in the SCA plugin in SSMS
- Supports multiple post-deployment scripts, in case there is a preference to manage static data tables in dedicated post-deployment scripts
- Allows approaches like bulk loading larger static data tables by supporting SQLCMD variables in migration and post-deployment scripts
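As one example of the SQLCMD-variable point above, a migration or post-deployment script could bulk load a larger static data table from a file whose location is supplied at deployment time. The variable name $(StaticDataPath), the table dbo.CountryCode, and the file name are all hypothetical:

```sql
-- Sketch only: $(StaticDataPath) is a hypothetical SQLCMD variable
-- supplied by the deployment; dbo.CountryCode is a placeholder table.
TRUNCATE TABLE dbo.CountryCode;

BULK INSERT dbo.CountryCode
FROM '$(StaticDataPath)\CountryCode.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2  -- skip the CSV header row
);
```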
If there are other use cases where a migrations approach would add value to the team – such as easily being able to run data modification statements as part of deployments, or being able to customize the TSQL for schema changes – then changing from SQL Source Control to SQL Change Automation might make sense.
Roadmap
I’ve chatted with our engineering teams and Redgate does have the desire to better support this scenario longer term. This is not a “quick fix” scenario and will likely require a rethinking of the way static data is managed within the tool.
While the “rethinking” bit of this has begun, we don’t specifically have this on the SQL Source Control roadmap (https://www.red-gate.com/products/sql-development/sql-source-control/roadmap) at this point. We do have improved support for use with Git feature branches as a future candidate on the roadmap, and we do know from research with many customers that static data is quite important to folks. While I can’t make any promises, I think we are likely to be able to offer an improvement in this area.
Currently the RedGateDatabaseInfo.xml file has a node called <DataFileSet>, underneath which is a node (<DataFile>) for each table that is static data tracked.
The top node under the <DataFileSet> (before the individual <DataFile>'s) is a count of all the <DataFile> nodes below.
Can this count be done in the code?
I am forever resolving merge conflicts that are just that count node having the wrong value.
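Until the tool handles this, one workaround is a small script that recomputes the count after a merge, so the conflict can be resolved mechanically. This sketch follows the structure described above — a count node as the first child of <DataFileSet>, followed by the <DataFile> entries — but verify the element layout against your own RedGateDatabaseInfo.xml before relying on it:

```python
import xml.etree.ElementTree as ET

def fix_datafile_count(xml_text):
    """Set the first child of each <DataFileSet> (assumed to be the count
    node) to the number of <DataFile> children. The layout is taken from
    the description above; check it against your actual file."""
    root = ET.fromstring(xml_text)
    for file_set in root.iter("DataFileSet"):
        children = list(file_set)
        if not children:
            continue
        data_files = file_set.findall("DataFile")
        # First child holds the count of <DataFile> nodes below it.
        children[0].text = str(len(data_files))
    return ET.tostring(root, encoding="unicode")
```

Running this over the file after a merge (or wiring it up as a git merge driver for RedGateDatabaseInfo.xml) keeps the count consistent without hand-editing.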