SSIS Catalog (SSISDB) Cleanup

I really appreciate what MS has done with the SSIS Catalog. We get built-in logging at a level that wasn’t possible in prior releases, but that comes at a cost. The default retention is quite high (365 days), and MS chose to handle cleanup using cascading deletes. That sort of coding makes life easier for the developers but almost always performs poorly, especially when one parent row has hundreds of thousands of child rows related to it. The problem is compounded because the first time people realize they need to lower the retention limit is usually about the time the database is filling up or has already filled up. At that point, it’s too late to lower the retention by a large amount, because the corresponding delete will still cause issues.

I knew someone had written some code for this, and a quick search on Stack Overflow found this question and the answer from Bill Fellows (b | t):

I’m including the code here for quick reference, though I recommend reading his answer, up-voting it, and checking out his other resources. The only change I’d consider making is clearing out the child tables in batches of 1,000-5,000 rows at a time instead of all at once. I found that I had to work in increments of no more than 10 days at a time when deleting history; any more than that and the execution time increased quite a bit. In that case, look at the maximum number of days you currently retain, set the @DaysRetention variable a couple of days below that, and see what is removed. Keep lowering it in steps until you reach your target retention, set your SSIS Catalog retention days to that value, and you should be good from that point on.
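Before lowering anything, it helps to know where you stand. A quick sketch for checking the catalog’s current retention setting and the oldest operation actually retained, then setting the retention once cleanup has caught up (the 300 here is just an example target, not a recommendation):

```sql
USE SSISDB;

-- Current retention setting for the catalog
SELECT property_name, property_value
FROM catalog.catalog_properties
WHERE property_name = N'RETENTION_WINDOW';

-- How many days of history are actually retained right now
SELECT DATEDIFF(DAY, MIN(start_time), CURRENT_TIMESTAMP) AS days_currently_retained
FROM internal.operations;

-- Once manual cleanup has worked down to your target, let the
-- built-in cleanup job maintain it from here on.
EXEC catalog.configure_catalog
    @property_name = N'RETENTION_WINDOW',
    @property_value = 300;  -- example value; substitute your own target
```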

It’s probably worth noting that in SQL Server 2012, MS didn’t include proper indexing by default. You may need to add some indexes in order to have the delete perform in a timely manner and avoid excessive table scans. The index issue was addressed in SQL Server 2014 and higher (and I think in SQL Server 2012 SP2 as well).
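If you’re on a build without those indexes, something along these lines can help. The index names below are my own invention, so confirm equivalent indexes don’t already exist in your SSISDB before creating anything:

```sql
-- Hypothetical index names. The cleanup deletes join the child tables
-- on operation_id, so indexes on that column let the deletes seek
-- instead of scanning the whole table.
CREATE NONCLUSTERED INDEX IX_event_message_context_operation_id
    ON internal.event_message_context (operation_id);

CREATE NONCLUSTERED INDEX IX_event_messages_operation_id
    ON internal.event_messages (operation_id);

CREATE NONCLUSTERED INDEX IX_operation_messages_operation_id
    ON internal.operation_messages (operation_id);
```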

With thanks to Bill Fellows for the script from here:
SET NOCOUNT ON;

IF OBJECT_ID('tempdb..#delete_candidates') IS NOT NULL
    DROP TABLE #delete_candidates;

CREATE TABLE #delete_candidates
(
    operation_id bigint NOT NULL PRIMARY KEY
);

-- Adjust accordingly. If you've been set at the default of 365 days,
-- you probably want to start close to that upper limit.
-- Adjust downward in appropriate increments for your environment.
-- 10-20 at a time is usually a good start.
DECLARE @DaysRetention int = 350;

INSERT INTO #delete_candidates (operation_id)
SELECT IO.operation_id
FROM internal.operations AS IO
WHERE IO.start_time < DATEADD(DAY, -@DaysRetention, CURRENT_TIMESTAMP);

-- Delete event_message_context, working in batches so a large range
-- doesn't turn into one massive delete.
WHILE EXISTS (SELECT * FROM internal.event_message_context AS emc
              JOIN #delete_candidates AS dc ON dc.operation_id = emc.operation_id)
BEGIN
    DELETE TOP (5000) T  -- batch size; tune between 1,000 and 5,000
    FROM internal.event_message_context AS T
    INNER JOIN #delete_candidates AS DC
        ON DC.operation_id = T.operation_id;
END

-- Delete event_messages
WHILE EXISTS (SELECT * FROM internal.event_messages AS e
              JOIN #delete_candidates AS dc ON dc.operation_id = e.operation_id)
BEGIN
    DELETE TOP (5000) T
    FROM internal.event_messages AS T
    INNER JOIN #delete_candidates AS DC
        ON DC.operation_id = T.operation_id;
END

-- Delete operation_messages
WHILE EXISTS (SELECT * FROM internal.operation_messages AS o
              JOIN #delete_candidates AS dc ON dc.operation_id = o.operation_id)
BEGIN
    DELETE TOP (5000) T
    FROM internal.operation_messages AS T
    INNER JOIN #delete_candidates AS DC
        ON DC.operation_id = T.operation_id;
END

-- Finally, remove the entry from operations
DELETE T
FROM internal.operations AS T
INNER JOIN #delete_candidates AS DC
    ON DC.operation_id = T.operation_id;

  1. deniscrotty

    Have you considered, or seen an example of, developing a secondary cleanup procedure that thins out the logged data after an intermediate period? I’d like to keep 370 days of retention, as there are some packages that only run annually, but after a short time, 1 or 2 weeks, I’m really only interested in post-execute events, errors, and the overall run time. A second procedure that thinned out the data after that 1- or 2-week period could hopefully reduce the size of this database drastically.

    • paschott

      I haven’t looked into it myself, but I imagine you could tweak the above script to handle removing the data you want to remove. You could filter for the packages and event types you want to keep, leave the catalog’s own retention set high enough that nothing you need is removed, and let the script trim the rest. Just set this up as its own scheduled job.
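      As a rough illustration of that idea, a hypothetical thinning sketch; @ThinAfterDays and the event_name filter values are my assumptions (column names as exposed by the catalog.event_messages view), so test in a non-production SSISDB first:

```sql
-- Hypothetical sketch: after two weeks, keep only errors and
-- post-execute events and drop the chattier diagnostic messages.
DECLARE @ThinAfterDays int = 14;

DELETE T
FROM internal.event_messages AS T
INNER JOIN internal.operations AS O
    ON O.operation_id = T.operation_id
WHERE O.start_time < DATEADD(DAY, -@ThinAfterDays, CURRENT_TIMESTAMP)
  AND T.event_name NOT IN (N'OnError', N'OnPostExecute');
```

      You’d likely also want to remove the matching internal.event_message_context rows first so nothing is orphaned.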

  2. deniscrotty

    Thank you for the reply. I agree that it doesn’t appear to be technically difficult, and since the built-in cleanup appears to rely on cascading deletes, I believe that thinning the related data should actually speed that part up. I do wonder what the impact will be on the built-in reports, but I’ll give it a shot in a test environment to see how it looks.
