Last week in the SQL Community Forum, one of my friends asked the following question:

If I have 2 Tlog files for one database, how will these files fill up?

A) File 1 will be filled first and then File 2
B) Both the files will be filled together.

If B is true, then how will sequentiality be maintained?

The reason this question came up was that he had observed that, with multiple log files, a single bulk insert operation left both log files 50% filled rather than a single log file 100% filled.

If you are looking for the reason why this happens, please skip to the end of the post. In this blog post, I would like to show how we can study the internals of SQL Server with the help of Extended Events (XEvents). To monitor the file writes, we can use the sqlserver.file_write_completed XEvent to understand how SQL Server performs sequential write operations on each transaction log file when a database has multiple log files.

For my testing, I reused the same example used by Jonathan Kehayias in his blog, where he explains the proportional fill algorithm.

Create a database with a single data file, 5 log files, and a single empty table into which we perform the bulk insert.

 
USE [master]
GO
/****** Object: Database [MultipleLogFiles] Script Date: 5/30/2015 4:54:10 PM ******/
CREATE DATABASE [MultipleLogFiles]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'MultipleLogFiles', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\MultipleLogFiles.mdf' , SIZE = 5316608KB , MAXSIZE = UNLIMITED, FILEGROWTH = 0)
LOG ON
( NAME = N'MultipleLogFiles_log_1', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\MultipleLogFiles_log_1.ldf' , SIZE = 1048576KB , MAXSIZE = 2048GB , FILEGROWTH = 0),
( NAME = N'MultipleLogFiles_log_2', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\MultipleLogFiles_log_2.ldf' , SIZE = 1048576KB , MAXSIZE = 2048GB , FILEGROWTH = 0),
( NAME = N'MultipleLogFiles_log_3', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\MultipleLogFiles_log_3.ldf' , SIZE = 1048576KB , MAXSIZE = 2048GB , FILEGROWTH = 0),
( NAME = N'MultipleLogFiles_log_4', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\MultipleLogFiles_log_4.ldf' , SIZE = 1048576KB , MAXSIZE = 2048GB , FILEGROWTH = 0),
( NAME = N'MultipleLogFiles_log_5', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\MultipleLogFiles_log_5.ldf' , SIZE = 1048576KB , MAXSIZE = 2048GB , FILEGROWTH = 0)
GO

-- Switch context to the new database so the table is created there
USE [MultipleLogFiles]
GO

IF OBJECT_ID('SalesOrderHeader') IS NOT NULL
DROP TABLE [dbo].[SalesOrderHeader]
GO

SET NOCOUNT ON
GO

CREATE TABLE [dbo].[SalesOrderHeader](
[SalesOrderID] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
[RevisionNumber] [tinyint] NOT NULL,
[OrderDate] Date NOT NULL,
[DueDate] Date NOT NULL,
[ShipDate] Date NULL,
[Status] [tinyint] NOT NULL,
[OnlineOrderFlag] [bit] NOT NULL,
[SalesOrderNumber] [nvarchar](25) NOT NULL,
[PurchaseOrderNumber] [nvarchar](25) NULL,
[AccountNumber] [nvarchar](15) NULL,
[CustomerID] [int] NOT NULL,
[SalesPersonID] [int] NULL,
[TerritoryID] [int] NULL,
[BillToAddressID] [int] NOT NULL,
[ShipToAddressID] [int] NOT NULL,
[ShipMethodID] [int] NOT NULL,
[CreditCardID] [int] NULL,
[CreditCardApprovalCode] [varchar](15) NULL,
[CurrencyRateID] [int] NULL,
[SubTotal] [money] NOT NULL,
[TaxAmt] [money] NOT NULL,
[Freight] [money] NOT NULL,
[TotalDue] [money] NOT NULL,
[Comment] [nvarchar](128) NULL,
[rowguid] [uniqueidentifier] NOT NULL,
[ModifiedDate] Date NOT NULL
)
GO

Create an XEvent session to capture the checkpoint_begin, checkpoint_end, file_written, file_write_completed, and databases_log_flush XEvents during the bulk insert operation.

IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name='MultipleLogFiles')
DROP EVENT SESSION [MultipleLogFiles] ON SERVER;

DECLARE @sqlcmd nvarchar(4000) = '
CREATE EVENT SESSION MultipleLogFiles
ON SERVER
ADD EVENT sqlserver.checkpoint_begin
( WHERE (sqlserver.database_id = '+ cast(DB_ID() as varchar(3))+')),
ADD EVENT sqlserver.checkpoint_end
( WHERE (sqlserver.database_id = '+ cast(DB_ID() as varchar(3))+')),
ADD EVENT sqlserver.file_written
( WHERE (sqlserver.database_id = '+ cast(DB_ID() as varchar(3))+')),
ADD EVENT sqlserver.file_write_completed
( WHERE (sqlserver.database_id = '+ cast(DB_ID() as varchar(3))+')),
ADD EVENT sqlserver.databases_log_flush
( WHERE (sqlserver.database_id = '+ cast(DB_ID() as varchar(3))+')),
ADD TARGET package0.asynchronous_file_target(
SET filename=''C:\XEvents\MultipleLogFiles.xel'',
metadatafile=''C:\XEvents\MultipleLogFiles.xem'')
WITH (MAX_MEMORY = 8MB, EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, TRACK_CAUSALITY = ON, MAX_DISPATCH_LATENCY = 5 SECONDS)'

EXEC (@sqlcmd)

Start the XEvent session, perform the bulk insert, issue a CHECKPOINT to flush the dirty buffers to the data files, and finally stop the XEvent session.

ALTER EVENT SESSION MultipleLogFiles
ON SERVER
STATE=START
GO
-- Load data into the test database
INSERT INTO dbo.SalesOrderHeader
SELECT RevisionNumber,
DATEADD(DD, 1126+number, OrderDate),
DATEADD(DD, 1126+number, DueDate),
DATEADD(DD, 1126+number, ShipDate),
soh.Status, OnlineOrderFlag, SalesOrderNumber,
PurchaseOrderNumber, AccountNumber,
CustomerID, SalesPersonID, TerritoryID,
BillToAddressID, ShipToAddressID,
ShipMethodID, CreditCardID, CreditCardApprovalCode,
CurrencyRateID, SubTotal, TaxAmt, Freight,
TotalDue, Comment, rowguid,
DATEADD(DD, 1126+number, ModifiedDate)
FROM AdventureWorks2012.Sales.SalesOrderHeader AS soh
CROSS JOIN master.dbo.spt_values AS sv
WHERE sv.type = N'P'
AND sv.number > 0 AND sv.number < 6
GO 50
-- Flush all dirty pages to disk
CHECKPOINT
GO
-- Stop the Event Session
ALTER EVENT SESSION MultipleLogFiles
ON SERVER
STATE=STOP
GO

Load the XEvent data from the .xel file into a table, then extract the XML event data into another table called MultipleLogFileResultsParsed for reporting.

IF OBJECT_ID('MultipleLogFileResults') IS NOT NULL
DROP TABLE MultipleLogFileResults
GO
IF OBJECT_ID('MultipleLogFileResultsParsed') IS NOT NULL
DROP TABLE MultipleLogFileResultsParsed
GO
-- Create results table to load data from XE files
CREATE TABLE MultipleLogFileResults
(RowID int identity primary key, event_data XML)
GO
-- Load the event data from the file target
INSERT INTO MultipleLogFileResults(event_data)
SELECT
CAST(event_data AS XML) AS event_data
FROM sys.fn_xe_file_target_read_file('C:\XEvents\MultipleLogFiles*.xel',
'C:\XEvents\MultipleLogFiles*.xem',
null, null)
GO
-- Parse the event data
SELECT
RowID,
event_data.value('(event/@name)[1]', 'varchar(50)') AS event_name,
DATEADD(hh,
DATEDIFF(hh, GETUTCDATE(), CURRENT_TIMESTAMP),
event_data.value('(event/@timestamp)[1]', 'datetime2')) AS [timestamp],
COALESCE(event_data.value('(event/data[@name="database_id"]/value)[1]', 'int'),
event_data.value('(event/action[@name="database_id"]/value)[1]', 'int')) AS database_id,
event_data.value('(event/data[@name="mode"]/text)[1]', 'nvarchar(4000)') AS [mode],
event_data.value('(event/data[@name="file_handle"]/value)[1]', 'nvarchar(4000)') AS [file_handle],
event_data.value('(event/data[@name="offset"]/value)[1]', 'bigint') AS [offset],
event_data.value('(event/data[@name="page_id"]/value)[1]', 'int') AS [page_id],
event_data.value('(event/data[@name="file_id"]/value)[1]', 'int') AS [file_id],
event_data.value('(event/data[@name="file_group_id"]/value)[1]', 'int') AS [file_group_id],
event_data.value('(event/data[@name="wait_type"]/text)[1]', 'nvarchar(100)') AS [wait_type],
event_data.value('(event/data[@name="duration"]/value)[1]', 'bigint') AS [duration],
event_data.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(4000)') AS [sql_text],
event_data.value('(event/data[@name="cpu"]/value)[1]', 'int') AS [cpu],
event_data.value('(event/data[@name="reads"]/value)[1]', 'bigint') AS [reads],
event_data.value('(event/data[@name="writes"]/value)[1]', 'bigint') AS [writes],
CAST(SUBSTRING(event_data.value('(event/action[@name="attach_activity_id"]/value)[1]', 'varchar(50)'), 1, 36) AS uniqueidentifier) as activity_id,
CAST(SUBSTRING(event_data.value('(event/action[@name="attach_activity_id"]/value)[1]', 'varchar(50)'), 38, 10) AS int) as event_sequence
INTO MultipleLogFileResultsParsed
FROM MultipleLogFileResults
ORDER BY RowID

To analyze the XEvent data, I used Power View, which is my favorite tool for data analytics and reporting.
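If you do not have Power View handy, a plain T-SQL aggregation over the parsed table gives a similar picture. This is a sketch (not the query behind the screenshots below) that counts log writes per file and orders the files by when they first received a write:

```sql
-- Count file_write_completed events per file_id to see which files
-- received writes, and order the files by when the writes first hit them.
SELECT file_id,
       COUNT(*)            AS write_count,
       MIN(event_sequence) AS first_event,
       MAX(event_sequence) AS last_event
FROM MultipleLogFileResultsParsed
WHERE event_name = 'file_write_completed'
GROUP BY file_id
ORDER BY MIN(event_sequence);
```

Reading the result top to bottom shows the order in which the physical files were written during the transaction.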

[Screenshot: LogWrites]

[Screenshot: LogWrites, filtered]

From the above screenshots, we observe the following:

  • When the transaction started, SQL Server began filling transaction log 5 (in this case file id 6)
  • Next it sequentially filled transaction log 1 (file id 2)
  • Next it sequentially filled transaction log 2 (file id 3)
  • Next it sequentially filled transaction log 3 (file id 4)
  • Lastly we see writes on data file id 1, since we fired CHECKPOINT at the end of the transaction to flush all dirty buffers to the data files

At this point, if we run DBCC LOGINFO or look at the size of the transaction logs, it might appear puzzling that SQL Server skipped transaction log 4 (file id 5) and filled transaction log 5 (file id 6) instead. In fact, after seeing the above data, you can tell that when the transaction started, the current active VLF was somewhere in the middle of transaction log 5 (file id 6); from there SQL Server started filling the log and sequentially moved on to the 1st log file (file id 2), the 2nd log file (file id 3), and then the 3rd log file (file id 4).

The following is the snapshot of DBCC LOGINFO at the end of transaction.

[Screenshot: DBCC LOGINFO output]

Hence, to answer my friend’s question:

Yes, SQL Server writes sequentially into the VLFs of a single file, then moves to the next file in a round-robin manner, and loops back to the first file after the transaction log is truncated.

So why did we observe that both transaction log files were 50% used, rather than a single file 100% used?

This is because, in his case, when the transaction started the current active VLF was in the middle of log file id 1; when the log writes reached the end of log file 1, they moved sequentially on to the VLFs of the second file, leaving each file 50% filled.

As observed in the output of DBCC LOGINFO earlier, when we have multiple log files the VLFs are ordered sequentially from the first log file to the last. Hence, if the current active VLF (status 2) is somewhere in the middle of the transaction log, then after reaching the end of the first file SQL Server moves to the next sequential VLF (FSeqNo in DBCC LOGINFO), which is at the start of the second log file and not at the start of the first log file. This is why both log files end up partially filled.
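You can locate the active VLFs yourself with a query like the following sketch. It captures the DBCC LOGINFO output into a temp table (the column list assumes the SQL Server 2012 output shape, where the RecoveryUnitId column was added) and then lists the active VLFs in logical order:

```sql
-- Capture DBCC LOGINFO output so we can locate the active VLFs.
-- Column list assumes the SQL Server 2012 shape of DBCC LOGINFO output.
CREATE TABLE #vlf
( RecoveryUnitId int, FileId int, FileSize bigint, StartOffset bigint,
  FSeqNo int, [Status] int, Parity tinyint, CreateLSN numeric(38,0) );

INSERT INTO #vlf
EXEC ('DBCC LOGINFO(''MultipleLogFiles'')');

-- Status = 2 marks active VLFs; ordering by FSeqNo shows the logical
-- sequence in which they are filled, regardless of which physical
-- file (FileId) each one lives in.
SELECT FileId, FSeqNo, [Status], StartOffset
FROM #vlf
WHERE [Status] = 2
ORDER BY FSeqNo;

DROP TABLE #vlf;
```

If the FileId changes partway through the ordered result, the active portion of the log spans more than one physical file, which is exactly the situation my friend observed.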

Further, after the above snapshot of DBCC LOGINFO, when we perform the next few transactions, the current active VLF is 92. After 92 is filled, SQL Server moves to the next VLF and changes its FSeqNo from 53 to 93, its status to 2, and its parity to 128.

[Screenshot: DBCC LOGINFO output after subsequent transactions]

Hope this explains the sequential writing of the transaction log.

Parikshit Savjani
Premier Field Engineer
