To take a backup of a table, we can use either a SELECT ... INTO or an INSERT ... SELECT statement. Here we will see which one is better for taking a backup of a table.
Recently, we had to take a backup of a table of around 10 GB. The first time, we created a backup table with the same structure and wrote a script to load the data into the newly created backup table.
INSERT INTO backup_table (list of columns)
SELECT list of columns
FROM source_table
Loading the data took more than 45 minutes, and we also observed that the transaction log file grew.
The second time, instead of using the above approach, we took a different approach to back up the table.
Read: Select INTO in SQL Server
https://bageshkumarbagi-msbi.blogspot.com/2016/05/select-into-in-sql-server.html
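A minimal sketch of that approach, with hypothetical table names since the real table is not shown here:

-- hypothetical names; SELECT ... INTO creates the backup table on the fly
select *
into backup_table_08_11
from source_table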
It took less time to load the data into the backup table, and the transaction log file did not grow as much as with the first approach.
Let’s see why SELECT ... INTO is faster than INSERT ... SELECT.
For the demo, we are using the AdventureWorksDW2019 database and the FactProductInventory table. This table has more than 776K records and is about 30 MB in size.
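The row count and size can be verified with sp_spaceused, for example:

EXEC sp_spaceused 'dbo.FactProductInventory'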
Before taking the backup, check the log size of this database.
SELECT file_id, name, type_desc, size, (size * 8) / 1024.0 AS size_in_mb
FROM sys.database_files
Now creating the table for the backup.
CREATE TABLE factproductinventory_bk_08_11_2024
(
    productkey   INT   NOT NULL,
    datekey      INT   NOT NULL,
    movementdate DATE  NOT NULL,
    unitcost     MONEY NOT NULL,
    unitsin      INT   NOT NULL,
    unitsout     INT   NOT NULL,
    unitsbalance INT   NOT NULL
)
Table created successfully
We enabled STATISTICS TIME and STATISTICS IO to view query statistics (execution time and CPU) and disk I/O statistics (reads and writes). We also tracked log space usage with the DBCC SQLPERF(LOGSPACE) command.
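For reference, these settings can be turned on in the session like this:

SET STATISTICS TIME ON
SET STATISTICS IO ON

-- Reports the log size and log space used (%) for every database
DBCC SQLPERF(LOGSPACE)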
Now loading the data.
First, check how many insert/modify records are currently in the transaction log.

select Operation, count(*) as [Number of Records]
from fn_dblog(null, null)
where Operation in ('LOP_INSERT_ROWS', 'LOP_MODIFY_ROW')
group by Operation
order by 2 desc
Now inserting the records.
insert into factproductinventory_bk_08_11_2024
    (productkey, datekey, movementdate, unitcost, unitsin, unitsout, unitsbalance)
select productkey, datekey, movementdate, unitcost, unitsin, unitsout, unitsbalance
from factproductinventory
Data inserted successfully
It took about 4,500 ms to insert the records, and the logical reads were 7,780,128.
See the log details.
Log space used (%) also increased, from 2.33% to 38.36%.
See that the 776K inserted records were written to the transaction log.
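To confirm this, the same fn_dblog query can be run again, filtered on the backup table's allocation unit (fn_dblog is undocumented, so treat this as a sketch):

select Operation, count(*) as [Number of Records]
from fn_dblog(null, null)
where Operation = 'LOP_INSERT_ROWS'
  and AllocUnitName like '%factproductinventory_bk_08_11_2024%'
group by Operation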
See the records in the table.
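A quick sanity check on the copied row count:

select count(*) as [Rows Copied]
from factproductinventory_bk_08_11_2024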
Let’s try the other approach.
select *
into factproductinventory_BK_08112024
from factproductinventory
Here it took around 1,200 ms to insert the records into the backup table, and the logical reads were 3,860.
See the transaction log details.
Nothing was written to the log, and there is no change in the log space used (%). This is one reason why SELECT ... INTO is faster: under the SIMPLE or BULK_LOGGED recovery model it is a minimally logged operation, so SQL Server records only page allocations rather than every inserted row.
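Whether minimal logging applies depends on the database's recovery model, which can be checked with a query against sys.databases:

select name, recovery_model_desc
from sys.databases
where name = 'AdventureWorksDW2019'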
See the records in the backup table.