Shrinking is bad
First of all: shrinking a database is not something you should do on a regular basis, for example as a step in a Maintenance Plan or as a recurring SQL Server Agent job.
But if the customer asks for it, especially after having implemented Page Compression to save a lot of disk space – you have to obey 😉
Of course we did not start with the production system. We had already shrunk the DEV and QA systems of many other SAP landscapes without impact, using the following approach:
Step 0: Document the original size and Space Usage.
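The original post does not show the Step 0 query. A minimal sketch (our addition, not the original script) that records size, used space, and free space per data file via sys.database_files could look like this:

```sql
-- Document the starting point: size and space usage per data file.
-- size and FILEPROPERTY(..., 'SpaceUsed') are reported in 8 KB pages,
-- so dividing by 128 converts to MB.
USE <DatabaseName>;

SELECT name AS LogicalFileName,
       physical_name AS PhysicalFileName,
       size / 128 AS FileSizeMB,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS SpaceUsedMB,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS FreeSpaceMB
FROM sys.database_files;
```

Keeping this output from before and after the shrink makes the savings easy to document.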
Step 1: SHRINKDATABASE with NOTRUNCATE
-- Shrink database <DatabaseName> with NOTRUNCATE to move all database pages
-- to the front of the physical file.
DBCC SHRINKDATABASE (SID, NOTRUNCATE);
Step 2: SHRINKDATABASE with TRUNCATEONLY
-- Shrink database <DatabaseName> with TRUNCATEONLY to cut off the free space
-- at the end of the physical file.
DBCC SHRINKDATABASE (SID, TRUNCATEONLY);
Here we can see a huge saving.
Step 3: Check Fragmentation
/* STEP 3 and 5: Check fragmentation of the database for documentation purposes. */
USE <DatabaseName>;

DECLARE @DatabaseID int;
SET @DatabaseID = DB_ID();

SELECT DB_NAME(@DatabaseID) AS DatabaseName,
       schemas.[name] AS SchemaName,
       objects.[name] AS ObjectName,
       indexes.[name] AS IndexName,
       objects.type_desc AS ObjectType,
       indexes.type_desc AS IndexType,
       dm_db_index_physical_stats.partition_number AS PartitionNumber,
       dm_db_index_physical_stats.page_count AS [PageCount],
       dm_db_index_physical_stats.avg_fragmentation_in_percent AS AvgFragmentationInPercent
FROM sys.dm_db_index_physical_stats(@DatabaseID, NULL, NULL, NULL, 'LIMITED') dm_db_index_physical_stats
INNER JOIN sys.indexes indexes
        ON dm_db_index_physical_stats.[object_id] = indexes.[object_id]
       AND dm_db_index_physical_stats.index_id = indexes.index_id
INNER JOIN sys.objects objects
        ON indexes.[object_id] = objects.[object_id]
INNER JOIN sys.schemas schemas
        ON objects.[schema_id] = schemas.[schema_id]
WHERE objects.[type] IN ('U', 'V')
  AND objects.is_ms_shipped = 0
  AND indexes.[type] IN (1, 2, 3, 4)
  AND indexes.is_disabled = 0
  AND indexes.is_hypothetical = 0
  AND dm_db_index_physical_stats.alloc_unit_type_desc = 'IN_ROW_DATA'
  AND dm_db_index_physical_stats.index_level = 0
  AND dm_db_index_physical_stats.page_count >= 1000
ORDER BY AvgFragmentationInPercent DESC;
You can see that the fragmentation is as high as you can imagine: the shrink moves pages one by one from the end of the file into the first free slot, which scrambles the logical order of the indexes.
Step 4: Index Maintenance
We use Ola Hallengren's Maintenance Solution for index optimization, so this step is simply executing the corresponding job in SQL Server Agent.
The rebuilds and reorganizations allocate fresh space, so the data file grows again, but it stays far below its original size.
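Under the hood, that Agent job calls Ola Hallengren's IndexOptimize procedure. A typical invocation could look like the following (the parameter values here are illustrative defaults, not the customer's actual settings):

```sql
-- Illustrative call to Ola Hallengren's IndexOptimize:
-- reorganize moderately fragmented indexes, rebuild heavily fragmented ones.
EXECUTE dbo.IndexOptimize
    @Databases = 'SID',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,   -- reorganize above 5 % fragmentation
    @FragmentationLevel2 = 30,  -- rebuild above 30 % fragmentation
    @UpdateStatistics = 'ALL';
```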
Step 5: Check Fragmentation again
After the index maintenance is done, the fragmentation of the indexes looks the way it should.
Locking, Blocking, Deadlocking
After quite a few systems where the approach shown above worked out pretty well, we ran into problems on the customer's most important production system. We experienced deadlocks, and the customer's employees had to sit in front of their SAP screens waiting for their transactions to complete.
So we had to kill the shrink job and think of another solution.
We came across the blog post written by Jürgen Thomas with all its good recommendations, but we definitely were not allowed any downtime for the shrink process, as the project team assumed it would take too long to run.
So with some additional hints from Clas Hortien (thanks, Clas 🙂 ) we were able to invent a process of shrinking the files, truncating them, and so, step by step, releasing the unused disk space back to the file system.
In this special scenario we had to deal with:
- 2.25 TB production database with 7 equally sized datafiles
- SAP ECC System (main ERP System)
- Global Customer, operates 24 x 7
- No downtime accepted for Shrinking Process
- No performance impact accepted
- Run on a daily basis between 7 pm and 11 pm, the time slot in which the customer uses the system the least.
- If deadlocks occur that are “owned” by the shrink process: kill the shrink process.
The solution contains three SQL Server Agent Jobs per Data File.
- FIT DBA SID Shrink File SIDDATA1
- FIT DBA SID Shrink File SIDDATA1 KILL and TRUNCATE
- FIT DBA SID Shrink File SIDDATA1 STOP KILL and TRUNCATE JOB
and one single “Prevent” job that had to detect occurring deadlocks and stop the jobs:
FIT DBA SID Shrink File Prevent Deadlocks
The following describes what the job steps really do. I will explain them for the first data file; the others are just the same.
FIT DBA SID Shrink File SIDDATA1
Try to shrink data file 1 to the smallest size possible (a target size of 1 MB tells SQL Server to shrink the file as far as it can).
DBCC SHRINKFILE (SIDDATA1, 1);
FIT DBA SID Shrink File SIDDATA1 KILL and TRUNCATE
This job consists of two steps:
Kill the Shrink Job.
DECLARE @SessionID INT;
DECLARE @SQLStatement NVARCHAR(20);

-- Find the session currently running the SHRINKFILE command
SELECT @SessionID = r.session_id -- , r.*, t.*
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE t.text LIKE '%DBCC SHRINKFILE (SIDDATA1,%'
  AND r.command LIKE 'Dbcc%';

PRINT 'SessionID to be killed: ' + CONVERT(NVARCHAR(14), @SessionID);
SET @SQLStatement = 'KILL ' + CONVERT(NVARCHAR(14), @SessionID) + ';';
PRINT @SQLStatement;
EXEC sp_executesql @SQLStatement;
Afterwards execute the TRUNCATEONLY to release the free space at the end of the data file.
DBCC SHRINKFILE (SIDDATA1, TRUNCATEONLY);
FIT DBA SID Shrink File SIDDATA1 STOP KILL and TRUNCATE JOB
This job has only one step. It is necessary to also stop the “KILL and TRUNCATE” job, because too many “running” jobs would likewise lead to locking and blocking.
EXEC msdb.dbo.sp_stop_job N'FIT DBA SID Shrink File SIDDATA1 KILL and TRUNCATE';
FIT DBA SID Shrink File Prevent Deadlocks
Now this is the watchdog over all the shrink jobs:
DECLARE @WhoIsActive TABLE (
    SessionID INT,
    BlockingSessionID INT,
    SQLStatement NVARCHAR(max)
);

INSERT INTO @WhoIsActive (SessionID, BlockingSessionID, SQLStatement)
SELECT S.session_id,
       ISNULL(R.blocking_session_id, 0),
       ST.text
FROM sys.dm_exec_sessions S
INNER JOIN sys.dm_exec_connections C
        ON S.session_id = C.session_id
CROSS APPLY sys.dm_exec_sql_text(C.most_recent_sql_handle) AS ST
LEFT JOIN sys.dm_exec_requests R
       ON S.session_id = R.session_id
WHERE S.is_user_process = 1
  AND S.session_id <> @@spid; -- exclude itself

-- Find the shrink session; if it blocks anyone, stop all shrink jobs.
DECLARE @BlockingShrinkSessionID INT;
SELECT @BlockingShrinkSessionID = SessionID
FROM @WhoIsActive
WHERE SQLStatement LIKE '%DBCC SHRINKFILE (SIDDATA%';

IF EXISTS (SELECT 1 FROM @WhoIsActive WHERE BlockingSessionID = @BlockingShrinkSessionID)
BEGIN
    DECLARE @JobName sysname;
    DECLARE TheCursor CURSOR FOR
        SELECT name
        FROM msdb.dbo.sysjobs jobs
        WHERE jobs.category_id = 102
          AND name LIKE 'FIT DBA SID Shrink File SIDDATA%';

    OPEN TheCursor;
    FETCH NEXT FROM TheCursor INTO @JobName;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        DECLARE @SQLStatement NVARCHAR(256);
        SET @SQLStatement = 'EXEC msdb.dbo.sp_stop_job N''' + @JobName + ''';';
        PRINT @SQLStatement;
        EXEC (@SQLStatement);
        FETCH NEXT FROM TheCursor INTO @JobName;
    END
    CLOSE TheCursor;
    DEALLOCATE TheCursor;

    EXECUTE msdb.dbo.sp_send_dbmail
        @profile_name = '<PutYourMailProfileHere>',
        @recipients = '<PutYourEMailRecipientHere>',
        @subject = 'FIT DBA SID Shrink PREVENTED FROM BEING DEADLOCKED';
END
Schedule of the Jobs
So how were the Jobs scheduled?
- We started with the first job on the full hour and scheduled each job hourly.
- The “SHRINK” jobs were scheduled to run on the full hour.
- The “KILL and TRUNCATE” jobs were scheduled to run 5 minutes after the “SHRINK” job.
- The “STOP KILL and TRUNCATE JOB” jobs were scheduled to run 5 minutes after the “KILL AND TRUNCATE” job.
- Between each data file we put 2 minutes “delay”.
- The “Watch Dog” job was scheduled to run every 10 seconds throughout the whole time window, since this is the period in which the customer's employees might be sitting in front of a SAP screen.
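As a sketch, a schedule like the first “SHRINK” job's can be created with msdb.dbo.sp_add_jobschedule (the job name is from above; the exact schedule values here are illustrative, not the customer's actual configuration):

```sql
-- Hypothetical schedule: run hourly between 7 pm and 11 pm every day.
EXEC msdb.dbo.sp_add_jobschedule
    @job_name = N'FIT DBA SID Shrink File SIDDATA1',
    @name = N'Hourly 19:00-23:00',
    @freq_type = 4,              -- daily
    @freq_interval = 1,          -- every day
    @freq_subday_type = 8,       -- repeat in units of hours
    @freq_subday_interval = 1,   -- every hour
    @active_start_time = 190000, -- 19:00:00
    @active_end_time = 225959;   -- 22:59:59
```

The “KILL and TRUNCATE” and “STOP KILL and TRUNCATE JOB” schedules would then simply shift @active_start_time by 5 and 10 minutes respectively.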
The customer did not experience any performance impact on the production system; just a small bunch of “prevented from being deadlocked” emails hit our inbox.
This is the historical decrease of the database size:
Thank you for reading, and best of luck shrinking your databases (if absolutely necessary)!