Make it faster!!!! Improve Performance!!!
Oog, each commit takes like five minutes. Neither standard commits of source code (same SVN server) nor Red Gate SQL Compare actions (same local database server) take that long!
Every time I go to the Commit Changes window, the source control plug-in takes a long time to refresh the changes. The step that takes the longest is step 3 of 4, "Registering database". Using SQL Server 2008 / SSMS 2008 and SVN. We were using TFS but moved to SVN hoping the performance would be better.
My gut feeling is that the change detection algorithm is too slow or inefficient. Perhaps you could use timestamps to improve performance by not having to check all of the objects every time.
This is making the product unusable for my team.
It seems that since the release of SQL Source Control 220.127.116.1108, step 4 has been taking a very long time. I don't know if this is just my setup or is happening for everyone. Also, if I try dedicated database development, step 4 never finishes.
Improve the speed and reliability. Use the same engine as your compare tools, which are very fast. TFS integration is very slow. Your last version was faster too. The new and fancy animation on the logo is nice, but I would trade it for a rollback.
Calculating changes for large databases is very slow
600+ tables (270+ static) and 2600+ procs mean that "Determining latest version" takes longer than is acceptable. I literally let this run, went for a 15-minute walk, and it still had not finished. Please work on scalability for large databases, or I cannot recommend this for purchase.
We have been doing a lot of work on SQL Source Control lately. We’ve recently improved performance when you link a database, go to the commit and get latest tabs (especially subsequent visits where we can rely on data being cached), and when selecting/deselecting all on the commit and get latest tabs.
If you are still experiencing performance problems, please make a new suggestion that is very specific to what you would like us to work on. Is it a specific step in the commit process that is taking a long time? Is it viewing history that is taking a long time?
We’ve also started to work on this suggestion, Don’t refresh the commit/get latest automatically, https://redgate.uservoice.com/forums/39019-sql-source-control/suggestions/462220-don-t-refresh-the-commit-get-latest-automatically, which will also help with performance.
Contact email@example.com if you can share your database schema with us for performance testing purposes.
If you are experiencing problems with server load, please see http://documentation.red-gate.com/display/SOC33/Changing+or+disabling+the+database+polling+interval. Contact firstname.lastname@example.org if you have any questions about this.
SQL Source Control Product Manager
I would like the ability to 'Refresh Selected' or similar. Imagine I refresh the 'Commit Changes' window and get a bunch of differences. Following the principle of atomic commits, I really want to select different changes and group them into single commits. Having committed one set of changes with one comment, I would like to go on and do the same for a different set. It would be nice if I could just clear the selected items, or 'refresh' them, to confirm that they had no further changes. That would leave the list a little clearer for the next commit.
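For what it's worth, the behaviour being asked for can be sketched in a few lines. Everything here is hypothetical (the function names, the callback, the object names) and is only meant to illustrate the workflow, not SQL Source Control's actual API:

```python
# Hypothetical sketch of a "Refresh Selected" action: re-check only the items
# the user selected, drop the ones that no longer differ, and leave the rest
# of the change list untouched instead of re-scanning every object.

def refresh_selected(change_list, selected_names, still_changed):
    """Re-check only the selected entries of a change list.

    change_list    -- object names currently shown as changed
    selected_names -- names the user selected (e.g. just committed)
    still_changed  -- callback that re-compares a single object
    """
    kept = []
    for name in change_list:
        if name in selected_names and not still_changed(name):
            continue  # committed and now clean: remove from the list
        kept.append(name)
    return kept

# After committing the two billing procs, only they get re-checked:
changes = ["dbo.Orders", "dbo.BillingProc1", "dbo.BillingProc2"]
remaining = refresh_selected(
    changes,
    {"dbo.BillingProc1", "dbo.BillingProc2"},
    still_changed=lambda name: False,  # pretend both committed cleanly
)
print(remaining)  # ['dbo.Orders']
```

The point of the design is that the expensive comparison runs only for the objects the user just acted on, which is a much smaller set than the whole database.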
You guys would probably like all our devs to use Source Control, but I am afraid it is not quite ready to be used by devs yet (performance being a key issue).
join free commented
Trial version of Data Compare 10 errors with System.OutOfMemoryException, even though the RGTEMP system variable is set to a large external disk folder. No success even after following these steps: http://documentation.red-gate.com/display/SDC10/Troubleshooting+System.OutOfMemoryException+during+comparison
We have exactly the same issue. We are using SQL Server 2008 R2 and 2012.
We have one database with 500 tables, and the 'Registering a database' step takes more than 30 minutes! This is ridiculous. How can I use it on larger databases?
I experienced the same problem
Dathan Liblik commented
We love this product theoretically, but the performance is spectacularly bad in not one, but all of: speed (5-6 minutes per sync), memory consumption (GBs), TFS traffic (high), reliability (crashes SSMS all the time), data integrity (at least once a month we have to manually do a file-based check-in to rectify some confusion in the sync that then leaves the product unusable until the check-in is manually resolved).
I don't want to be negative, but this is such an outlier to normal Red Gate quality, and your premium prices mean you should have a stellar tool with maybe an occasional bug. This is two years running now. What could be more important?
Red Gate - get on this. It's a major blemish on your otherwise excellent standards. Please.
18.104.22.168 resolved the issue for me.
Just tried 22.214.171.124 and the initial syncing issues are still there. I outlined two observations in the following forum post:
I'm going back to 126.96.36.19929 again. =/
@Nicholas: Yeah, I also tried the latest and experienced the same issue. So I'm staying with 188.8.131.5229 for now.
Nicholas Orlando commented
So the issue does still seem to be in 184.108.40.206, but only when you first connect the database.
Nicholas Orlando commented
I still seem to have the issue in 220.127.116.11
Update: Downgrading to 18.104.22.16829 resolves the issue for me. It's a workaround. I hope they make that installer available for others to download while they work on this.
Interesting, my Step 4 has greatly increased in time but I thought it was all related to this duplicate trigger error that Red Gate is working on. Apparently not. So I'm adding a vote for this one.
Coders are not very patient to sit there and listen to a disk spin and spit.
It appears that the connection logic is broken and is ignoring the enabled transport settings. I consistently receive the "Named Pipes Provider" error during step 4. Why is Red Gate trying to use Named Pipes when it is disabled on both the client and server machines? This makes the tool completely unusable.
So, true. It's now taking 5 minutes or more to complete step 4. Not sure how this is "A more reliable and less resource-intensive method for giving the username for changes in the shared model." per the release notes. I've been using RG tools for nearly 10 yrs now and have taken them into numerous shops. But over the last several releases of SQL Source Control it seems like the product is moving backwards.
I'm running on the "dedicated" configuration. Step 4 does take a very long time, however, it does complete for me. It has completed on two of my smallest databases. The larger databases are still running after ~20 minutes.
To improve performance, you need to change the base algorithm. If you stick with the current logic and compare everything every time, this product will never work as expected and will be unusable on databases with a fair number of objects. You should compare only what has changed since the last sync. modify_date in sys.objects is your friend!
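The idea above can be sketched concretely. This is an assumption about how change detection could work, not how SQL Source Control actually works, and note the caveat that `modify_date` has blind spots (it won't surface dropped objects, for instance), so a periodic full compare would still be needed:

```python
from datetime import datetime

# Sketch of timestamp-based change detection: instead of comparing every
# object on every refresh, ask sys.objects for anything modified since the
# last successful sync and run the expensive comparison only on those.

CHANGED_SINCE_SQL = """
SELECT s.name AS schema_name, o.name, o.type_desc, o.modify_date
FROM sys.objects AS o
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE o.is_ms_shipped = 0
  AND o.modify_date > ?   -- parameter: last successful sync time
"""

def objects_to_compare(all_objects, last_sync):
    """Local stand-in for the query above: keep only objects whose
    modify_date is newer than the last sync."""
    return [o for o in all_objects if o["modify_date"] > last_sync]

objs = [
    {"name": "dbo.Orders",   "modify_date": datetime(2013, 5, 1)},
    {"name": "dbo.GetOrder", "modify_date": datetime(2013, 5, 20)},
]
print([o["name"] for o in objects_to_compare(objs, datetime(2013, 5, 10))])
# ['dbo.GetOrder']
```

With 600 tables and 2600 procs, the difference between comparing everything and comparing the handful of objects touched since the last sync is exactly the multi-minute wait people are describing.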
Ben Sproat commented
How about an update on this? 3.1 is out and the product is still unusable for one of our primary databases. The Commit tab takes upwards of 20 minutes to come up, if it doesn't time out first. It only ever works once or twice, and only if that is the only database we have linked. After the first or second refresh we run out of memory. If I have to work in that database, I load it up on a separate machine, click the Commit tab, and go do something else for a while on my normal machine, hoping it actually gives me results. If it fails, I reboot and try again.
We find that after installing and using SQL Source Control, latest version, we routinely get out of memory exceptions in management studio. The only solution is to close and re-open management studio.
Also, I get very high tempdb usage for my sessions on the db side when I'm using SQL Source Control. The usage can be as high as 8 GB. It is quite ridiculous. This stays in use until I close and re-open management studio.
As for performance, we have one db that is around 120 GB with hundreds of sprocs and hundreds of tables, and it is painfully slow in SSC. However, we have another db that is over 200 GB with an equally large number of tables and sprocs, and it is quite fast. The database server versions are the same, and the db compatibility level is the same.
Jason Kochel commented
Same issue. What I don't understand is that the object explorer tree shows the blue dots next to the changed objects (so it's tracking in the background) but when I go to 'Commit Changes' it takes a long time to offer the list of changed objects.
'Get Latest' is even worse. Perhaps it's because I'm using Mercurial? I'll 'hg update' to a particular branch (which updates the SSC-created .sql files on my local drive), but when I 'Get Latest', it seems like it's going back to Mercurial or maybe scanning the entire directory (~7500 objects) to see what's new.
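If the slowness really is a full scan of the working folder, a cheap way around it can be sketched. This is speculation about what the tool could do (the function and snapshot format are made up): stat the `.sql` files, compare modification times against a cached snapshot, and re-read only what changed since the last check:

```python
import os

# Sketch: avoid re-parsing all ~7500 .sql files after an 'hg update' by
# caching file mtimes and re-reading only files that are new or modified.

def changed_files(root, last_mtimes):
    """Return the .sql files that are new or modified since the cached
    snapshot, plus an updated snapshot to store for next time."""
    current = {}
    for dirpath, _dirs, files in os.walk(root):
        for f in files:
            if f.endswith(".sql"):
                path = os.path.join(dirpath, f)
                current[path] = os.path.getmtime(path)
    changed = [p for p, m in current.items() if last_mtimes.get(p) != m]
    return changed, current
```

The first call (empty snapshot) reports everything; subsequent calls report only what `hg update` actually touched, which is usually a small fraction of the repository. Asking Mercurial directly (e.g. via `hg status`) would be cheaper still, since the VCS already knows what it changed.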
Seems like a lot of Mercurial users on this site. Perhaps a more native integration is possible?
Same problem; performance is an issue. We do not have a really big DB: around 200 tables, ~90 stored procedures, and ~50 views. It takes more than 2 minutes just on "Registering working database", then 2-3 minutes more for the first 3 steps, and then around 3-4 minutes on the last step, "Calculating changes". Doing this at least 3 times a day, with 4 developers needing to do updates and commits, is annoying.