Central Subscriber Model Explained

The majority of SQL Server Replication topologies are based on the Central Publisher model, which consists of a single publisher replicating to one or more subscribers.  Another replication model, which is sometimes overlooked, is the Central Subscriber model, which consists of multiple publishers replicating to one subscriber using Transactional Replication.

Central Subscriber Model

The Central Subscriber model is useful for rolling up or consolidating data from multiple sources.  Some examples include:

  • Rolling up inventory from several warehouses into a central server at corporate headquarters.
  • Sending data from remote offices within a company to a central office for business continuity.
  • Consolidating order information to one location for centralized ordering.

Priming the pump

By default, subscriptions are initialized from a snapshot generated by the Snapshot Agent and then applied by the Distribution Agent.  When the snapshot is applied, the article property Action if name is in use is set by default to Drop existing object and create a new one, which instructs that the destination table be dropped if it already exists at a subscriber.  This behavior is problematic in the Central Subscriber model since snapshots must be applied from multiple publications.  The first snapshot is applied as expected; however, each subsequent snapshot application wipes out the previously applied snapshot data.

The solution to this problem is a combination of horizontal partitioning, static row filters, and setting the Action if name is in use article property to Delete data. If article has a row filter, delete only data that matches the filter.

Horizontal partitioning

Ideally, published tables in a Central Subscriber topology will be horizontally partitioned.  In order to horizontally partition the tables to be published, a location-specific column should be added and included as a part of a composite primary key.  Consider a table that looks like this:

CREATE TABLE TestTable
(
    ID int IDENTITY(1,1) NOT NULL,
    FirstName varchar(100) NULL,
    CONSTRAINT PK_TestTable_ID PRIMARY KEY CLUSTERED (ID ASC)
)

To horizontally partition TestTable and prepare it for a Central Subscriber configuration at Publisher 1, drop primary key constraint PK_TestTable_ID, add a location-specific column named LocationID with a default value of 1, and add the new composite primary key including the LocationID column.

ALTER TABLE TestTable
DROP CONSTRAINT PK_TestTable_ID
GO
 
ALTER TABLE TestTable
ADD LocationID INT NOT NULL DEFAULT(1)
GO
 
ALTER TABLE TestTable
ADD CONSTRAINT PK_TestTable_ID_LocationID PRIMARY KEY CLUSTERED (ID, LocationID)
GO

Next, to horizontally partition TestTable and prepare it for a Central Subscriber configuration at Publisher 2, the same preparation can be done with a default value of 2 for LocationID.

ALTER TABLE TestTable
DROP CONSTRAINT PK_TestTable_ID
GO
 
ALTER TABLE TestTable
ADD LocationID INT NOT NULL DEFAULT(2)
GO
 
ALTER TABLE TestTable
ADD CONSTRAINT PK_TestTable_ID_LocationID PRIMARY KEY CLUSTERED (ID, LocationID)
GO

Finally, to horizontally partition TestTable and prepare it for a Central Subscriber configuration at Publisher 3, the same preparation can be done with a default value of 3 for LocationID.

ALTER TABLE TestTable
DROP CONSTRAINT PK_TestTable_ID
GO
 
ALTER TABLE TestTable
ADD LocationID INT NOT NULL DEFAULT(3)
GO
 
ALTER TABLE TestTable
ADD CONSTRAINT PK_TestTable_ID_LocationID PRIMARY KEY CLUSTERED (ID, LocationID)
GO

Once the tables are horizontally partitioned, they can be properly published in a Central Subscriber topology by using static row filters, filtering on the LocationID column and setting the article property Action if name is in use to Delete data. If article has a row filter, delete only data that matches the filter.

Static row filters

For each article to be published in a Central Subscriber topology, a static row filter should be defined to leverage the Action if name is in use article property appropriately.  A static row filter uses a WHERE clause to select the data to be published.  To publish rows from Publisher 1, specify LocationID = 1 for the filter clause. Likewise, to publish rows from Publisher 2 and Publisher 3, specify LocationID = 2 and LocationID = 3 for the filter clause, respectively.


Action if name is in use

When creating the publications and adding articles, the article property Action if name is in use needs to be set to Delete data. If article has a row filter, delete only data that matches the filter.  This can be set using the New Publication Wizard Article Properties dialog or by using the replication stored procedure sp_addarticle and specifying a value of delete for the @pre_creation_cmd argument.  This way, when the central subscriber is initialized or reinitialized from multiple publication snapshots, previously applied snapshot data will be preserved since only data matching the filter clause will be deleted.
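
As a reference point, here is a minimal sketch of adding such an article with replication stored procedures at Publisher 1.  The publication name is illustrative, and when scripting manually, calls to sp_articlefilter and sp_articleview are typically also needed to generate the filter objects.

-- Run at Publisher 1 in the publication database (publication name is hypothetical)
EXEC sp_addarticle
    @publication = N'Pub_Location1',
    @article = N'TestTable',
    @source_owner = N'dbo',
    @source_object = N'TestTable',
    @type = N'logbased',
    @filter_clause = N'LocationID = 1',  -- static row filter
    @pre_creation_cmd = N'delete'        -- delete only data matching the filter on snapshot application
GO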


The caveat

As we can see, horizontal partitioning requires that tables have a location-specific column added, however, the location-specific column does not necessarily need to be included as a part of the primary key at the publication databases.  In addition, it is not a hard requirement that published tables in a Central Subscriber topology be horizontally partitioned.  In some shops, changing a primary key or adding additional columns is strictly prohibited, in which case I would urge you to take an alternative approach.  If you would like some ideas on implementing a Central Subscriber topology without modifying primary keys or horizontally partitioning publication databases, feel free to get in touch or leave a comment below.

-Brandon Williams (blog | linkedin | twitter)

Chad Churchwell: Protect Your Replication Environment with AlwaysOn Availability Groups

Heads up everyone.  Next Tuesday, January 22nd, 2013, Chad Churchwell (blog | twitter) will be presenting a webinar on behalf of Pragmatic Works on protecting your replication environment with AlwaysOn Availability Groups.  I've had the pleasure of following Chad over the last year and have really come to appreciate his dedication and commitment to providing quality replication content to the SQL Server Community.  According to his author profile at MSSQLTips:

Chad Churchwell is a SQL Server professional specializing in High Availability, Disaster Recovery, and Replication. He has been in IT for 14 years, working with SQL Server for 10 years, and is currently a senior DBA Consultant with Pragmatic Works. He is active in the community speaking at several SQL Saturday events, as well as maintaining a blog at www.sqlchad.com.

With the release of SQL Server 2012, replication received support for AlwaysOn Availability Groups.  Come listen to Chad Churchwell discuss the supported scenarios and how to achieve high availability for your replication topologies.  We should be in for a treat and I hope to see you all there.

 
Registration: Protect Your Replication Environment with AlwaysOn Availability Groups

Date and time:  January 22nd 2013 11:00 am Eastern

Abstract:

This session is intended to show how you can combine AlwaysOn Availability Groups with replication to protect your replication environment for High Availability. We will cover supported scenarios for replication, publisher and subscriber setup, and different failure points in your replication topology. We will conclude with a demo on failover of a publisher and subscriber.

-Brandon Williams (blog | linkedin | twitter)

Auditing changes in Merge Replication

One trick I have learned from the folks on the Replication Support Team is how to proactively audit data changes in Merge Replication.  This is useful when troubleshooting issues such as identifying where data changes are coming from, who is causing them, and what the data is before and after the change.  I have found this valuable information to have on a few occasions and thought I would share.

Keep in mind this should only be used for troubleshooting purposes and should always be tested in pre-production prior to deploying.  Make sure it works first!

Auditing data changes for a Merge article can be done by creating insert, update, and delete triggers to capture data changes and record them into an audit table.  The audit table rows consist of GETDATE(), APP_NAME(), HOST_NAME(), SUSER_NAME(), column data, action type, command, and spid for each insert, update, and delete that occurs on the article to audit.  You will have to modify the script to adjust the name of the table being audited and the relevant columns that you think should be included in the audit data.  Usually just the primary key columns are enough, but other columns can be included as well.

Audit script

/************************************************************************
This is a script to implement audit triggers of insert, update, delete on 
a base table. It is based on a generic scenario for a table with four 
columns col1, col2, col3, col4.
 
It will insert into a table called source_audit.
 
Run this script on the database you would like to audit.
 
Test first to be sure it is working as expected!
 
After the problem occurs, export the contents of the source_audit table.
************************************************************************/
 
USE DB_to_audit
GO
 
-- Drop audit table if exists
IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[source_audit]') AND type in (N'U'))
DROP TABLE [dbo].[source_audit]
 
-- Create audit table
CREATE TABLE [dbo].[source_audit]
    ([tstamp] datetime NULL, 
     [ProgramName] nvarchar(128) NULL, 
     [hostname] nvarchar(128) NULL, 
     [suser] nvarchar(128) NULL,
     [col1] int NULL,
     [col2] nchar(10) NULL,
     [col3] datetime NULL,
     [col4] binary NULL,
     [actiontype] char(2) NULL,
     [inputbuffer] nvarchar(255) NULL,
     [spid] int NULL)
GO
 
-------------------------------------------
-- INSERT trigger
-------------------------------------------
 
-- Delete trigger if exists
IF  EXISTS (SELECT * FROM sys.triggers WHERE object_id = OBJECT_ID(N'[dbo].[audit_source_INS]'))
DROP TRIGGER [dbo].[audit_source_INS]
GO
 
-- Create INSERT trigger
CREATE TRIGGER audit_source_INS
on Table_1
FOR INSERT
AS
DECLARE @command NVARCHAR(255)
 
if 0 = (select count(*) from inserted) return
 
CREATE TABLE #InputBuffer ([eventtype] NVARCHAR(30), [parameters] INT, [eventinfo] NVARCHAR(255))
INSERT INTO #InputBuffer exec('DBCC INPUTBUFFER(@@spid) WITH NO_INFOMSGS')
SELECT @command=eventinfo from #InputBuffer
 
INSERT INTO source_audit
SELECT getdate(), app_name(), host_name(), suser_name(), col1, col2, col3, col4, 'I', @command, @@SPID
FROM inserted
GO
 
-------------------------------------------
-- UPDATE trigger
-------------------------------------------
 
-- Delete trigger if exists
IF  EXISTS (SELECT * FROM sys.triggers WHERE object_id = OBJECT_ID(N'[dbo].[audit_source_UPD]'))
DROP TRIGGER [dbo].[audit_source_UPD]
GO
 
-- Create UPDATE trigger
CREATE TRIGGER audit_source_UPD
on Table_1
FOR UPDATE
AS
DECLARE @command NVARCHAR(255)
 
if 0 = (select count(*) from inserted) return
 
CREATE TABLE #InputBuffer ([eventtype] NVARCHAR(30), [parameters] INT, [eventinfo] NVARCHAR(255))
INSERT INTO #InputBuffer exec('DBCC INPUTBUFFER(@@spid) WITH NO_INFOMSGS')
SELECT @command=eventinfo from #InputBuffer
 
INSERT INTO source_audit
SELECT getdate(), app_name(), host_name(), suser_name(), col1, col2, col3, col4, 'UD', @command, @@SPID
FROM deleted
INSERT INTO source_audit
SELECT getdate(), app_name(), host_name(), suser_name(), col1, col2, col3, col4, 'UI', @command, @@SPID
FROM inserted
GO
 
-------------------------------------------
-- DELETE trigger
-------------------------------------------
 
-- Delete trigger if exists
IF  EXISTS (SELECT * FROM sys.triggers WHERE object_id = OBJECT_ID(N'[dbo].[audit_source_DEL]'))
DROP TRIGGER [dbo].[audit_source_DEL]
GO
 
-- Create DELETE trigger
CREATE TRIGGER audit_source_DEL
on Table_1
FOR DELETE
AS
DECLARE @command NVARCHAR(255)
 
if 0 = (select count(*) from deleted) return
 
CREATE TABLE #InputBuffer ([eventtype] NVARCHAR(30), [parameters] INT, [eventinfo] NVARCHAR(255))
INSERT INTO #InputBuffer exec('DBCC INPUTBUFFER(@@spid) WITH NO_INFOMSGS')
SELECT @command=eventinfo from #InputBuffer
 
INSERT INTO source_audit
SELECT getdate(), app_name(), host_name(), suser_name(), col1, col2, col3, col4, 'D', @command, @@SPID
FROM deleted
GO

Examining the results

Once the audit table and triggers are in place we can begin collecting audit data.  The audit data can be exported from the audit table after reproducing the problem and queried elsewhere at a later time, or it can be queried directly.  Here is a sample audit of a Merge publisher and the audit data after an update and sync from subscriber WS2008R2_1.

SELECT tstamp, ProgramName, hostname, suser,
col1, actiontype, inputbuffer, spid
FROM dbo.source_audit


 
Using this approach, we can identify where data changes are coming from, who is causing them, and what the data is before and after the change.  This can be very useful information to have, especially when troubleshooting conflicts and determining where the conflicting changes are originating — but I will save that for a future post.  If you would like help implementing an auditing scheme in your Merge topology, feel free to drop me a line or leave a comment below.

-Brandon Williams (blog | linkedin | twitter)

Implementing a replication agent progress bar

Using Replication Management Objects, SQL Server subscriptions can be synchronized programmatically without using SQL Server Agent or SQL Server Management Studio.  Common uses include:

  • Express edition subscribers
  • Sync from within an application on-demand without an agent job
  • Display agent status information in an application

In a previous post I provided a link to a code sample showing how to synchronize a Merge pull subscription in SQL Server Express using RMO.  Taking this a step further, for this post I will discuss how to implement a Merge Agent progress bar during synchronous execution, handling the MergeSynchronizationAgent.Status event and displaying the results in a Windows Form.

Synchronously synchronize asynchronously

When using the MergeSynchronizationAgent.Synchronize method to synchronize the Merge Agent, it is important to realize that this starts the agent synchronously and control remains with the running agent until it completes.  This can make updating UI controls a bit tricky since UI events can be delayed until the Merge Agent finishes synchronizing, which is not very useful.  The key to making this work smoothly is to use a BackgroundWorker to synchronize the agent on a separate thread asynchronously and report progress back to the main UI thread when the MergeSynchronizationAgent.Status event is raised.

From here the synchronization BackgroundWorker.DoWork event handler can subscribe to the MergeSynchronizationAgent.Status event and start the Merge Agent synchronously for a specified subscription.

Status Event

The MergeSynchronizationAgent.Status event handler reports Merge Agent progress by passing the StatusEventArgs PercentCompleted and Message values back to the main UI thread, where they are handled by the synchronization BackgroundWorker.ProgressChanged event handler.

ProgressChanged Event

Finally, the synchronization BackgroundWorker.ProgressChanged event handler smoothly updates the progress bar and text box controls according to the Merge Agent status.
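
Putting these pieces together, here is a minimal sketch of the pattern.  It is illustrative rather than production-ready: the server, database, publication, and control names are assumptions, it uses Windows Authentication throughout, and error handling is omitted.

using System;
using System.ComponentModel;
using System.Windows.Forms;
using Microsoft.SqlServer.Replication;  // RMO

public partial class SyncForm : Form
{
    private readonly BackgroundWorker syncWorker = new BackgroundWorker();

    public SyncForm()
    {
        InitializeComponent();
        syncWorker.WorkerReportsProgress = true;
        syncWorker.DoWork += SyncWorker_DoWork;
        syncWorker.ProgressChanged += SyncWorker_ProgressChanged;
    }

    // Kick off synchronization on the worker thread, e.g. from a button click.
    private void buttonSync_Click(object sender, EventArgs e)
    {
        if (!syncWorker.IsBusy) syncWorker.RunWorkerAsync();
    }

    private void SyncWorker_DoWork(object sender, DoWorkEventArgs e)
    {
        var agent = new MergeSynchronizationAgent
        {
            Publisher = "PUBSERVER",             // hypothetical names
            PublisherDatabase = "PubDB",
            Publication = "TestMergePub",
            Subscriber = "EXPRESS1",
            SubscriberDatabase = "SubDB",
            Distributor = "PUBSERVER",
            SubscriptionType = SubscriptionOption.Pull,
            PublisherSecurityMode = SecurityMode.Integrated,
            DistributorSecurityMode = SecurityMode.Integrated,
            SubscriberSecurityMode = SecurityMode.Integrated
        };

        // Relay Merge Agent status to the UI thread.
        agent.Status += (s, args) =>
            syncWorker.ReportProgress((int)args.PercentCompleted, args.Message);

        agent.Synchronize();  // blocks this worker thread until the agent completes
    }

    private void SyncWorker_ProgressChanged(object sender, ProgressChangedEventArgs e)
    {
        progressBar1.Value = Math.Min(e.ProgressPercentage, 100);
        textBoxStatus.AppendText(e.UserState + Environment.NewLine);
    }
}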

Sample

This code sample can be downloaded from the MSDN Code Gallery:

If you have any questions about the sample, or would like help integrating this into your application, feel free to ping me or leave a comment below.

-Brandon Williams (blog | linkedin | twitter)

PASS DBA Virtual Chapter: Replication with Hilary Cotter

I’m pleased to announce that this Wednesday, April 11th, 2012, Hilary Cotter will be presenting for the PASS DBA Virtual Chapter on SQL Server Replication.  Replication is a powerful and flexible data distribution solution that has become one of the more mature features in SQL Server.  Come listen to Microsoft MVP and replication expert Hilary Cotter discuss SQL Server Replication.  We should be in for a treat and I hope to see you all there.

PASS DBA Virtual Chapter Live Meeting Event: http://dba.sqlpass.org/

Date and time:  April 11th 2012 12:00 pm Mountain time

Topic:  SQL Server Replication – sponsored by Quest Software

Presenter:  Hilary Cotter

Abstract:  Replication is a native SQL Server component which allows you to copy data from one database or server to another and can be configured to replicate bi-directionally.  In this webcast, SQL Server MVP Hilary Cotter discusses replication types, common use cases and problems, troubleshooting and monitoring, and how to squeeze optimal performance from merge, transactional, and bi-directional replication.

Bio:  Hilary Cotter is an industry veteran and has been a SQL Server MVP for 11 years. He specializes in replication, full-text search, and SQL Server Service Broker. He has worked for many Fortune 500 companies implementing cutting-edge replication solutions. He has written and co-authored several books.

Live Meeting link:  The Live Meeting link for the webcast and meeting archive can be found on the PASS DBA Virtual Chapter website.

Synchronizing subscriptions in SQL Server Express

I was recently asked about synchronizing a Merge pull subscription in SQL Server Express using RMO, the lack of a SQL Server Agent, and what to do about it.  Since SQL Server Express does not include the SQL Server Agent, pull subscriptions must be synchronized by some other means.  The problem becomes evident when trying to open the View Synchronization Status dialog for a pull subscription in SQL Server Express.

View Synchronization Status

The View Synchronization Status dialog depends on a SQL Server Agent job, which does not exist in SQL Server Express.  To accommodate the lack of a SQL Server Agent, SQL Server Express Merge pull subscriptions can also be synchronized by executing batch scripts, using Windows Synchronization Manager, or using Replication Management Objects (RMO).  While losing the SQL Server Agent is not optimal, plenty of options are available.  This is where it really pays to know the different ways a subscription can be synchronized when tasked with an Express subscriber.

Batch script

The Merge Agent (replmerg.exe) executable can be run from the command line on-demand or from a batch script as a scheduled task.  To run from the command line, execute replmerg.exe from the COM folder.

replmerg.exe
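
For example, a command along these lines starts the Merge Agent for a pull subscription using Windows Authentication.  The path shown is for SQL Server 2012, and the server, database, and publication names are illustrative:

"C:\Program Files\Microsoft SQL Server\110\COM\replmerg.exe" ^
  -Publisher "PUBSERVER" -PublisherDB "PubDB" -Publication "TestMergePub" ^
  -Subscriber "EXPRESS1" -SubscriberDB "SubDB" -Distributor "PUBSERVER" ^
  -SubscriptionType 1 -PublisherSecurityMode 1 -DistributorSecurityMode 1 -SubscriberSecurityMode 1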

Likewise, this can be saved as a batch file and run from Task Scheduler on a schedule.  This option alone provides a pretty good replacement for the SQL Server Agent for synchronizing subscriptions in Express.

Windows Synchronization Manager

Another option to synchronize pull subscriptions in SQL Server Express is Windows Synchronization Manager, or Sync Center.  Sync Center provides options for setting and viewing sync partnerships, setting subscription properties and sync schedules, and viewing sync results and conflicts.  This tool is ideal for having non-technical users synchronize SQL Server Express pull subscriptions on-demand as it offers a user-friendly interface.  Synchronization schedules can also be configured as needed.


RMO

For all the developers, Replication Management Objects (RMO) can be used to synchronize SQL Server Express Merge pull subscriptions from managed code.  The RMO MergeSynchronizationAgent class exposes a Synchronize method which can be used to synchronize a subscription without an agent job, provided the required connection properties are supplied.  A code sample demonstrating how to synchronize a SQL Server Express Merge pull subscription using RMO can be downloaded from the MSDN Code Gallery.

So

As we can see, there are options for synchronizing pull subscriptions in SQL Server Express.  That is because Replication was built with SQL Server Express subscribers in mind.  Pull subscriptions can be synchronized using batch scripts, Windows Synchronization Manager, and RMO, which should be sufficient to synchronize the data.  If you happen to know of another way to synchronize SQL Server Express pull subscriptions and would like to share, feel free to leave a comment below.

Executing scripts with sp_addscriptexec

Looking for ways to do things more quickly and efficiently, I thought I would talk a little bit about sp_addscriptexec and how it can be leveraged to expedite the process of executing SQL scripts in a replication topology.  After a SQL script is created and saved on the publisher, it can be distributed to and executed on UNC/FTP deployed subscribers using sp_addscriptexec.

The syntax for sp_addscriptexec per BOL:

sp_addscriptexec [ @publication = ] publication
[ , [ @scriptfile = ] 'scriptfile' ]
[ , [ @skiperror = ] 'skiperror' ]
[ , [ @publisher = ] 'publisher' ]

The problem with numbers

If you’re dealing with a large number of subscribers, database administration can be tricky.  Tasks such as adding logins and users, granting permissions, maintaining indexes, and managing constraints must be done individually at each node, and deploying all of the scripts can be very time consuming.  Rather than visit each node, sp_addscriptexec should be used to post the ad-hoc script to all subscribers in the topology, saving valuable time.  Put simply: if you’re visiting each node to execute a script, you’re doing it wrong.

The benefit of using sp_addscriptexec is that the publisher can act as a centralized script repository.  Scripts can be saved to the publisher and executed on demand for subscribers.  This process is quicker and more efficient than copying, saving, and executing scripts directly at each subscriber.  Not only does this save time, but space as well.

Executing scripts

Applying scripts to subscribers from the publisher can be done as follows:

  • Create and test the script
  • Save the script to the publisher
  • Execute sp_addscriptexec at the publisher to apply the script to subscribers, as shown in the example below

From here, the script will be executed at each subscriber on the next synchronization.  Taking this one step further — to limit a script to run on only certain subscribers, a check for HOST_NAME can be performed in the script.

-- Limit script to Subscriber1, Subscriber2, and Subscriber3
IF HOST_NAME() IN ('Subscriber1', 'Subscriber2', 'Subscriber3')
	-- script goes here
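
For example, a script saved on the publisher can be posted to the subscribers of a publication with a call along these lines, run at the publisher on the publication database.  The publication name and UNC path are illustrative:

EXEC sp_addscriptexec
    @publication = N'TestPub',
    @scriptfile = N'\\PUBSERVER\Scripts\MaintainIndexes.sql',
    @skiperror = 1  -- 1 = skip errors and continue (see the caveat below)
GO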

Something to consider

A caveat with sp_addscriptexec is that, by default, if the script being executed encounters an error at the subscriber, the replication agent will fail and retry on subsequent syncs in an endless loop, so be aware of this behavior.  It can be overridden by passing a value of 1 for @skiperror, which instructs the agent to skip errors.  At the very least, SQL scripts should be tested thoroughly at the publisher before calling sp_addscriptexec to apply them to subscribers.

Arithmetic Overflow in sp_MSarticle_validation

This is probably a rare occurrence, but I've noticed that data validation for an article with 10 billion or more rows will fail due to an arithmetic overflow error in sp_MSarticle_validation.

Data validation for a specific article in a transactional publication is done using sp_article_validation, which in turn calls sp_MSarticle_validation.  In the definition of sp_MSarticle_validation, a local variable named @actual_rowcount is defined, which is of type bigint.  Later on in sp_MSarticle_validation, a command is built to execute sp_table_validation, and the @expected_rowcount parameter passed in is derived by converting @actual_rowcount to a varchar(10).

The offending statement can be observed in the following code snippet from sp_MSarticle_validation:

select @command = 'exec dbo.sp_table_validation @table = ''' + replace(@destination_table, '''', '''''')  + ''', @expected_rowcount = ' +
	        convert(varchar(10), @actual_rowcount) + ', @expected_checksum = ' +
	        convert(varchar(100), @actual_checksum) + ', @rowcount_only = ' + convert(varchar(5),@rowcount_only) +
	        ', @full_or_fast = ' + convert(varchar(10), @full_or_fast) +
	        ', @shutdown_agent = ' + convert(varchar(10), @shutdown_agent)

Considering @actual_rowcount is a bigint, it should be converted to a varchar(19) rather than a varchar(10).  A row count of 10 billion or more requires at least 11 characters, so the varchar(10) conversion raises an arithmetic overflow error and validation fails.
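
The conversion failure can be seen in isolation with a hypothetical row count:

-- An 11-digit bigint value does not fit in a varchar(10)
DECLARE @actual_rowcount bigint = 10000000000  -- 10 billion
SELECT CONVERT(varchar(19), @actual_rowcount)  -- succeeds
SELECT CONVERT(varchar(10), @actual_rowcount)  -- fails with an arithmetic overflow error
GO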

If you find yourself needing to validate an article with 10 billion or more rows, please vote this item as important to expedite a fix.

Creating merge replication conflict alerts

One of the challenges in working with Merge Replication is handling conflicts, since writes can occur at any node in a topology.  In an earlier post I demonstrated how to handle conflicts using the business logic handler framework; now I would like to show how to create Merge Replication conflict alerts based on the Performance Monitor counter SQLServer:Replication Merge Conflicts/sec.

Ideally, proper planning would be done in an application to minimize the chances of conflicts occurring, whether that is achieved through column-level tracking, partitioning the writes, filtering, or some combination.  However, sometimes application changes can introduce conflicts, and having alerts in place is a smart precaution to take.

To set up a conflict alert, identify the instance name of the Performance Monitor counter SQLServer:Replication Merge Conflicts/sec to monitor.  The instance name for the Merge conflicts counter for a publication can be identified in the Add Counters dialog in Performance Monitor:

Add Counters Dialog

For this example, the instance name that I will monitor is WS2008R2_1-AdventureWorksLT-TestMergePub1-WS2008R2_1-49.

From here sp_add_alert can be used to specify a merge conflict performance condition using our instance name to alert us when a conflict arises.  This can be done with the following bit of T-SQL:

USE msdb
GO
 
EXEC sp_add_alert @name=N'Merge Conflict Alert',
        @message_id=0,
        @severity=0,
        @enabled=1,
        @delay_between_responses=0,
        @include_event_description_in=0,
        @category_name=N'[Uncategorized]',
        @performance_condition=N'SQLServer:Replication Merge|Conflicts/sec|WS2008R2_1-AdventureWorksLT-TestMergePub1-WS2008R2_1-49|>|0'

This can also be modified to raise alerts only when a certain threshold is met, to suit your needs.  An alert response can be set to send an email when the conflict performance condition is met, which gives a much needed heads-up when things start to go awry.  I hope this provides a technique to detect precisely when conflicts occur and a head start on tracking down the culprit.
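
For example, assuming an operator named DBA and Database Mail have already been configured, an email notification can be attached to the alert created above:

EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Merge Conflict Alert',
    @operator_name = N'DBA',
    @notification_method = 1  -- 1 = email
GO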

Error when creating a subscription

When creating a subscription in SQL Server, the following error will sometimes occur:

The remote server "%s" does not exist, or has not been designated as a valid Publisher, or you may not have permission to see available Publishers.

The error is pretty straightforward and indicates that the replication agent login used to connect to the Publisher is not a member of the publication access list (PAL).  That would be the Distribution Agent process account for Transactional Replication, or the Merge Agent process account for Merge Replication.  Add the agent process account to the PAL and try to create the subscription again.
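
The PAL can be modified on the Publication Access List page in SSMS, or with sp_grantpublicationaccess executed at the Publisher on the publication database.  The publication and login names here are illustrative:

EXEC sp_grantpublicationaccess
    @publication = N'TestPub',
    @login = N'DOMAIN\repl_agent'
GO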

Publication Access List