
IBM® DB2® for Linux®, UNIX®, and Windows®
Best practices
Combining IBM DB2 pureScale™ Feature with Q Replication for Scalability and Business Continuity
Anand Krishniyer
Replication Server Center of Competency
Jerome Colaco
DB2 System Verification Test
Serge Bourbonnais
Lead Architect, Replication Server
Aslam Nomani
DB2 Quality Assurance Manager
Updated: December 2011
Table of contents
Combining IBM DB2 pureScale™ with Q Replication for Scalability and Business Continuity
Executive summary
Introduction
Q Replication concepts and operation
    Special considerations for pureScale
    Configuration choices - First, what are your objectives?
    The continuous availability configuration used for this paper
    The components of Q Replication
    The Q Replication process: How transactions are replicated
    How Q Replication leverages WebSphere MQ staging to provide continuous operations
Deploying Q Replication: What does it involve?
    Q Replication pre-installation steps and capacity planning
    Q Replication installation and configuration
    Replication subscriptions: Which tables do you want to replicate?
    Replication operations: Controlling the replication process
Automating Q Replication failover using Tivoli Software Automation
Monitoring, tuning, and troubleshooting replication
Conclusion
    Contributors
Appendix 1: Sample commands for preparing the databases for replication
Appendix 2: Downloading and installing WebSphere MQ
Appendix 3: Testing WebSphere MQ connectivity
Appendix 4: Installing and configuring the Q Replication Dashboard
Appendix 5: Automating Q Replication Failover using TSA
Notices
    Trademarks
Executive summary
Keeping up with today’s increasingly global and competitive marketplace requires a data
processing architecture that provides the flexibility to grow with future strategic
requirements and ensures business continuity throughout component outages,
maintenance activities, and catastrophic events.
For some enterprises, a single hour of downtime can translate to millions of dollars of
lost revenue, not to mention the damage to the company’s reputation and the potential
loss of customers. Global enterprises operate across time zones and offer business
services around the clock. Reserved offline windows for system maintenance and
upgrades no longer exist. Distributed enterprises need the ability to provide proximity of
service in each geographic location, coupled with the ability to circumvent network
failures and long transmission delays.
This paper outlines an architecture that addresses these availability requirements.
In December 2009, IBM introduced the DB2 pureScale Feature for Enterprise Server
Edition. The DB2 pureScale Feature builds on proven design features from the IBM DB2
for z/OS database software. The DB2 pureScale Feature is intended to meet the needs of
many customers by providing:
- Virtually unlimited capacity: the ability to scale out your system by adding additional machines to your cluster with ease.
- Application transparency: the ability to leverage your existing applications without changes.
- Single site continuous availability: an "all active" architecture with inherent redundancy.
- Reduced total cost of ownership (TCO): simplified deployment and management of advanced technology.
Since its first release in 2004, the Q Replication technology has provided low-latency,
high-volume replication for DB2. Q Replication complements the capabilities of
pureScale by providing:
- Protection from disk and site failure by replicating the database to a remote site.
- Continuous availability during upgrades and full site maintenance by allowing the transfer of applications to another site until maintenance is completed, and the re-synchronization of these sites afterward.
- Workload off-loading and live reporting by offloading reporting applications to another database, thereby eliminating any possible contention with business-critical workloads, and allowing analytics and reporting on live data.
- Protection from data corruptions by maintaining a secondary copy of the database where changes are delayed in relation to the primary, allowing for a quick recovery from user or application errors.
By extending the DB2 pureScale Feature with Q Replication, you get scaling, reliability
and transparency along with the protection and continuous availability of off-site
replication. This paper discusses the considerations for this solution and describes how to
deploy Q Replication with the DB2 pureScale Feature.
Introduction
The DB2 pureScale Feature leverages a shared-disk database implementation based on
the DB2 for z/OS data-sharing architecture. It brings proven technology from the DB2
database software on the mainframe to open systems. Using the DB2 pureScale Feature
offers the following key benefits:
• Virtually unlimited capacity – The DB2 pureScale Feature provides practically unlimited capacity by allowing for the addition and removal of DB2 members on demand. The DB2 pureScale Feature can scale to 128 members and has a highly efficient centralized management facility that allows for very efficient scale-out capabilities. The DB2 pureScale Feature also leverages a technology called Remote Direct Memory Access (RDMA), which provides a highly efficient inter-node communication mechanism that also facilitates its scaling capabilities.
• Application transparency – An application that utilizes a DB2 pureScale database does not need to have any knowledge of the different members in the cluster or to be concerned about partitioning data. The DB2 pureScale Feature will automatically route applications to the members deemed most appropriate. The DB2 pureScale Feature also provides native support for a great deal of syntax used by other database vendors, allowing those applications to run in a DB2 pureScale environment with minimal or no changes. The benefits of the DB2 pureScale Feature can be leveraged in many cases without modifying the application.
• Single site continuous availability – The DB2 pureScale Feature provides an "active-active" configuration such that if one member goes down, processing can continue at the remaining active members. During a member failure, in-process transaction data on the failing member is temporarily unavailable until database recovery completes, which typically takes a matter of seconds.
• Reduced TCO – The DB2 pureScale Feature can help reduce TCO through its integrated and simplified deployment and maintenance capabilities. The DB2 pureScale interfaces handle the deployment and maintenance of components integrated within the DB2 pureScale Feature.
The two links below point to existing papers on understanding and deploying the DB2
pureScale feature on AIX and Linux:
http://www.ibm.com/developerworks/data/library/techarticle/dm-1005purescalefeature/index.html
http://www.ibm.com/developerworks/data/library/techarticle/dm-1104purescale/index.html
Since its initial release in 2004, Q Replication has provided high-performance replication, based on log capture and transaction replay, for DB2 on Linux, UNIX, and Windows and for DB2 on z/OS, including DB2 z/OS data sharing. Q Replication is capable of
replicating large volumes of changes for thousands of DB2 tables across thousands of
kilometers, often with sub-second latency. Q Replication leverages WebSphere MQ for
efficient data transmission and staging of the replicated changes. Q Replication
complements the capabilities of pureScale by providing:
- Protection from disk or site failure - A DB2 pureScale instance provides virtually unlimited scalability and protection from individual member outages, but traditionally leverages a single copy of the data across disks configured with Redundant Array of Independent Disks (RAID). Replicating the database to a remote site provides additional protection from disk array failures and can also provide a solution in the event of a site disaster. Using Q Replication, the distance between sites can be unlimited, allowing for the recovery site to be as far away from the primary site as necessary to avoid being subject to the same set of risks as the primary business location.¹ As the replication process happens nearly in real-time using Q Replication, database recovery can be nearly immediate, with a Recovery Time Objective (RTO) of a few seconds.
- Continuous availability during upgrades and maintenance - Planned maintenance and migrations account for far more business interruptions than disasters. By transferring applications to another site that is synchronized by using Q Replication, you can achieve business continuity during system maintenance and updates. Examples of upgrades include changes to hardware, operating systems, DB2 versions, and applications, and any data center maintenance that might require system shutdown.
- Workload off-loading and live reporting - In a DB2 pureScale instance, analytic and reporting applications can create lock contention with online business transactions, hampering database performance. By moving reporting applications to another database, you can generate reports and perform analytics on live data without affecting critical business applications. The Q Replication process is asynchronous and does not impact application response times. Q Replication can replicate data between highly dissimilar systems and even perform data transformations. For example, the primary site might be a 16-member DB2 pureScale instance, with Q Replication replicating a subset of the database in near real-time (a sub-second average latency is often achievable) to another pureScale instance that has fewer members, or to a DB2 ESE instance, or even to a non-DB2 database. Q Replication provides the ability to replicate subsets of a database or even subsets of the database transactions (for example, you can replicate only selected operations or exclude transactions that were executed by a particular user).
- Protection from data corruption with a time-delayed copy – For point-in-time recovery, Q Replication can maintain a secondary copy of the database that is some period of time behind the primary database. This secondary copy can be used to recover from user or application errors on the primary database.
The secondary database can also be used for point-in-time query reporting. Keeping a delayed copy of a database is easily achievable with Q Replication technology because changes captured from the source database accumulate in a WebSphere MQ receive queue at the target. The Q Apply program at the target system can be run with the applydelay parameter² and will continuously apply changes a specified number of seconds after they were committed at the source. Changes can also be applied in batches with the applyupto parameter, for which the Q Apply program will apply changes up to a pre-determined point in time, and then stop.
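For illustration, a possible invocation at the target might look like the following sketch; the delay value and the timestamp format are examples only, and the applydelay parameter requires the fix pack levels noted in the footnote at the end of this section:
# Apply changes 30 minutes (1800 seconds) after they were committed at the source
asnqapp apply_server=QSAMPLE2 applydelay=1800 &
# Or apply all changes committed up to a given point in time, then stop
asnqapp apply_server=QSAMPLE2 applyupto=2011-12-01-17.00.00 &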
Staging the changes at the target in this manner provides protection from a disaster
with a potential recovery point objective (RPO) of sub-second, even if the data is not
applied immediately. Because the changes are captured and delivered to the target
queue in near real-time, the DB2 logs at the source can be archived normally; they do
not need to be held back for replication. Should a failover be required, the recovery
time (RTO) is the time it takes for replication to clear the backlog of changes that has
accumulated in the receive queue at the target. The Q Apply program is extremely fast at clearing backlogs, often able to clear millions of accumulated changes in minutes, thanks to its parallel apply technology.

¹ This is a recommended practice for financial institutions, as suggested by the U.S. government in "Interagency Paper on Sound Practices to Strengthen the Resilience of the U.S. Financial System" [Docket No. R-1128] (April 7, 2003).
² The applydelay parameter is available with V9.8 Fix Pack 4 and V9.7 Fix Pack 5 or higher.
Q Replication concepts and operation
We now present and explain the considerations, configuration choices, concepts and technology, as well as the deployment and operational procedures that are required for achieving full continuous availability of your DB2 instance with Q Replication. This paper specifically covers pureScale, but all concepts and commands presented are equally applicable to any other DB2 system.
Special considerations for pureScale
There are no major differences between using Q Replication with DB2 pureScale and with any other DB2 offering, including DB2 ESE, InfoSphere Warehouse (DB2 with the Database Partitioning Feature), and DB2 for z/OS. Product features, tools, administration, operations, and use cases are nearly identical on all platforms. The specifics for DB2 pureScale, which at least conceptually are the same for DB2 z/OS data sharing, relate to:
• Understanding the performance implications of running replication in a data-sharing environment
• Providing a shared disk for replication objects
• Choosing a data member for running replication
• Restarting the replication components on another member if the member where they are running fails or is taken down, and optionally, defining resources for automating the restart process.
This paper will address these specific questions, as well as all other considerations that
might be of a general nature for replication but are essential for understanding how to
achieve a multi-site continuous availability solution with DB2 pureScale.
Configuration choices - First, what are your objectives?
If your primary requirement for replicating data is live reporting, the target might be
DB2 ESE or even a non-DB2 system. In this case, the objective is to eliminate contention
at the source system by moving away some applications. For example, if you have a
reporting application that requires many index scans, the performance of your OLTP
workloads will suffer if you run the reporting at the source. To remedy this problem,
move the reporting application to another server. For live reporting, replication is usually
configured in one direction, from the source to the target system.
If you are replicating data for continuous availability, with the intent of running the
same application at either site, then you want to replicate all the data that is required for
running this application at either site. One common configuration is to have a large
primary system with a smaller secondary system that only protects the data that is
critical for business continuity. For continuous availability, replication is usually
configured to replicate data in both directions, even if the same data is not
simultaneously updated at either site.
With Q Replication, the source and target systems can be very dissimilar. They do not
have to run the same operating systems or DB2 versions. This flexibility enables you to
better use your available hardware, stage migrations, and deploy only the capacity
required at each site.
The continuous availability configuration used for this paper
The examples provided and concepts explained in this paper are derived from deploying
Q Replication between two pureScale systems in San Francisco, California and Toronto,
Canada. Throughout the text and appendices, we will provide a complete set of specific
instructions, all the way from installation to advanced operations that can be optionally
used for hands-on experimentation, as a way to assimilate the concepts explained. Unless
noted otherwise, all explanations and commands provided are applicable for any DB2
instance; it does not have to be pureScale.
Our replication setup is bidirectional. If you plan to use the second site only for read-only
applications, replication can be started in only one direction; then when the need arises to
fail over the applications to Site 2, replication is started in the other direction before
failover.
Both DB2 databases are fully “active,” meaning that they can accept any SQL workload.
However, with multidirectional replication, applications should be balanced between the
sites in a manner that avoids conflicts. If conflicts occur, Q Replication will detect and
report conflicting SQL statements in its IBMQREP_EXCEPTIONS table, and resolve the
conflict according to its CONFLICT_ACTION parameter. In this tutorial, we set
CONFLICT_ACTION to F (force) at one site and I (ignore) at the other site, establishing a master/subordinate relationship between the sites. A setting of F (force) means that an
update that is replicated for a row that was deleted at the target by another application
will be transformed into an insert statement.
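If you want to check which conflicts have been reported, you can query the exceptions table at either site; the following is a minimal sketch (the column list is abbreviated, and the default ASN control-table schema is assumed):
db2 connect to QSAMPLE2
db2 select EXCEPTION_TIME, RECVQ, REASON, SQLCODE from ASN.IBMQREP_EXCEPTIONS order by EXCEPTION_TIME desc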
The figure below illustrates the configuration that is used for the examples provided
throughout this paper. Site 1 is a DB2 pureScale instance located in San Francisco and
composed of four systems, with IP hostnames of coralx501 through coralx504. Site 2 is
another DB2 pureScale instance located some 3000 miles away in Toronto. It is composed
of two systems with the IP hostnames coralx506 and coralx507. The physical database is
called QSAMPLE at both sites. Each site is fully active; DB2 is started at each site and able
to execute DB2 workloads. Shared disk is used at each location so that DB2 and
WebSphere MQ data can be accessed from any member of each DB2 pureScale instance.
Figure 1 - DB2 pureScale and Q Replication topology used for this paper.
We chose a bidirectional configuration for meeting both live reporting and continuous
availability requirements. During normal operations, the smaller site is used mostly for
live reporting, but in the event of a disaster or planned maintenance, business
applications are failed over – explicitly rerouted – to the second site, and the live
reporting applications are suspended until the primary site can be recovered. Once the
failed site returns, the data is re-synchronized automatically by restarting replication to
deliver the changes that took place on the secondary site while the primary site was
down.
The components of Q Replication
The components of a bidirectional Q Replication configuration include:
• One Q Capture and one Q Apply program for each direction
• One WebSphere MQ queue manager per system (when using a local server
connection, which we are using). WebSphere MQ queues are created on shared
disk for staging and transmitting the replicated changes (we recommend that this
shared disk is separate from the shared disk DB2 pureScale uses).
The components of Q Replication are illustrated in Figure 2. The Q Capture and Q Apply
programs must run on the same member as the WebSphere MQ queue manager. This set
of programs can run on any pureScale member; they can even run on a remote server,
but in this paper we configured them to run within each cluster for optimal performance.
We leverage the WebSphere MQ V7 option for specifying a shared disk location for the
queue data and then define each queue manager so that it can be started from another
member of the instance, when the member where it runs goes down.
Figure 2 - The Q Replication components
The Q Replication process: How transactions are replicated
Q Replication uses log-capture/transaction-replay technology; changes are captured from
the DB2 recovery log, transmitted over the network via WebSphere MQ, and then re-executed at the target as DB2 SQL transactions.
Figure 3 - The Q Replication process
With Q Replication, changes are transported over the network by the queue manager via
a transmission queue (XMITQ) where data is staged until it is transmitted. One
WebSphere MQ message is transmitted for each DB2 transaction that is captured from
the log. Each message is very compact, containing only data values and very little header
information. If the network or the target system is down, changes can still be captured
and staged in the transmission queue at the source system. Once delivered, the
transactions are staged in the receive queue from which they are applied to the target
database in parallel by a pool of parallel Q Apply agents. Transactional integrity is
preserved. If either the target database or Q Apply is down, changes can be staged at the
target in the receive queue until both Q Apply and DB2 are available.
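One way to observe this staging behavior, sketched here using the queue manager and transmission queue names that are defined later in this paper, is to display the current depth of the transmission queue at the source with an MQSC command:
echo "DISPLAY QLOCAL('QSAMPLE2') CURDEPTH MAXDEPTH" | runmqsc QSAMPLE1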
How Q Replication leverages WebSphere MQ staging to
provide continuous operations
Q Replication enables your enterprise to keep your applications online during
maintenance, upgrades, or modifications to the replication configuration. The affected
applications can be switched over to the other site until the maintenance or upgrade has
completed.
With enough staging capacity in the receive queue, it is sometimes possible to perform
major upgrades over several days, even weeks, allowing generous time for testing,
without any downtime. Once the new system is ready to operate, you can simply restart
Q Apply and clear the backlog of changes that have accumulated during the outage. This
storage is persistent and will guarantee integrity even after system or software crashes.
The ability to stage changes in the receive queue at the target even if DB2 or Q Apply is
taken down is particularly useful for disaster recovery. Because changes are safely
moved away from the source system, in near real-time, they are available should a
disaster occur on the source system, and can be applied before failover to the second site.
The database administrator can independently start, stop, or resume the capture or apply
process for a queue (and therefore all tables replicated on that queue), allowing changes
to accumulate in the WebSphere MQ queues until the target is ready for updates.
You can also temporarily exclude an individual table from replication, for example to
perform an offline REORG of a table, and then simply resume replication from where it
was suspended for that table. While replication is suspended for this table, changes to the
table are accumulated in a temporary but persistent WebSphere MQ queue that is
transparently created by the Q Apply program.
It is also possible, at any time, to add any number of tables to an existing replication
configuration without ever needing to suspend the replication process for existing tables.
During the time that any table added to the replication configuration is being loaded at
the target, replication continues unaffected for all tables already replicated.
The Q Replication protocol relies on messages exchanged over WebSphere MQ and
control data that is stored in DB2 tables (without ever needing 2-phase commit between
WebSphere MQ and DB2). This allows complete independence between the Q Capture
and Q Apply programs. Administrative tasks, including configuration changes, can be
performed even if one of the programs has been stopped.
The figure below illustrates how a single table is excluded from a replication
configuration when a table is not ready for use at the target. The target table might be in
the process of being loaded, or a database administrator might have made this table
unavailable in order to perform maintenance activities such as rebuilding indexes. The
DBA purposely stops the application of changes to the table by issuing a spillsub command. A DBA can thus experiment with modifications for performance optimization,
testing on one site while production is running at the other site.
Figure 4 - Continuous operations while a table is loaded or taken offline at the target
Changes that are replicated for a table that is not available at the target are stored in a
spill queue that is created automatically by the Q Apply program. This spilling method is
used either during the initial load of a table at the target, which can be done
automatically by the Q Apply program, or when explicitly requested by the
administrator via a spillsub command. While spilling is taking place, the applications can
keep running at the source site, and replication for all unaffected tables continues
unaffected. If a replicated transaction changes other tables along with the table in spilling
mode, the rest of the transaction is applied and committed. The backlog of changes in a
spilled queue is applied when the target table is ready, either when the load is done or
the subscription resumed, and the spill queue is automatically removed by the Q Apply
program after the backlog is cleared.
Deploying Q Replication: What does it involve?
The following sections will guide you through the necessary decisions and actions
needed to deploy Q Replication, from planning to system setup and production
operations. The activities required are:
1. Pre-installation verification and capacity planning, including enabling DB2 for replication
2. Installation and configuration of WebSphere MQ and the Q Replication license
3. Definitions of replication subscriptions; that is, what tables do you want to replicate?
4. Operations – starting and stopping replication, handling an outage
5. Monitoring, tuning, and troubleshooting to get the maximum return from the solution.
Q Replication pre-installation steps and capacity planning
Q Replication is a log-capture/transaction-replay technology; that is, the data changes are
captured from the DB2 recovery log and SQL statements are reconstructed for replay at
the target.
At the target database, for the purpose of DB2 capacity planning and tuning, the DB2
instance must therefore have sufficient capacity for handling the actual SQL workload
that is replicated. In general, any performance tuning that was required for the
applications at the source system is also applicable at the target system.
At the source database, the main Q Capture activity against DB2 is reading the log, for
which generally no DB2 tuning is required. But, you might want to consult Database
parameter settings for Linux, UNIX, and Windows in the DB2 Information Center for
fine-tuning recommendations, particularly if the volume of transactions to replicate is
exceptionally large.
CPU requirements
The Q Capture and Q Apply replication programs, along with WebSphere MQ, add a
small overhead to the replicated DB2 workload.
At the target, as a rule-of-thumb, Q Apply and WebSphere MQ combined add
approximately 20 percent to 25 percent overhead to the CPU load needed to apply the
DB2 changes from the source to the target database. That is, approximately 75 percent of
the CPU that is needed for applying changes at the target is spent in DB2, which is
equivalent to the CPU that was required at the source system for executing those same
statements, and approximately 25 percent of the required CPU is spent in Q Apply and
WebSphere MQ combined. The CPU overhead from Q Apply and WebSphere MQ is
only on the member where these programs run; the CPU overhead introduced by
replication on the other members is generally negligible. For very high volumes of
replicated changes coming from systems with a large number of members, it might
become beneficial to dedicate a member to running the Q Apply program at the target.
At the source, in very high-volume performance experiments where each of the four
members was running at 50 percent of its CPU capacity, the Q Capture program added
approximately 10 percent CPU overhead to the member where it ran for capturing
changes from all other members. The overhead on other members for capturing log
records is negligible.
In general, CPU requirements vary widely for different environments and workloads,
and it is recommended to test with your application.
Disk space requirements
Disk space is required for staging changes in the WebSphere MQ receive queue at the
target. Minimally, this space only needs to be large enough for handling the largest
possible DB2 transaction that might be replicated, but the size should be large enough to
store the amount of changes expected to accumulate during an outage. For example, if
you are replicating 1000 rows per second, with each row averaging 200 bytes, and want
to be able to hold up to 24 hours of accumulated changes in case the target database is
taken down, then you might allocate 1000 rows * 200 bytes/row * 3600sec/hour * 24 =
17.3GB of space to the receive queue at the target system. There is a minimal overhead
for the WebSphere MQ message header, generally a few hundred bytes per replicated
DB2 transaction, which you can use to round up your estimate. However, even if the receive queue fills up, the situation is not troublesome. When replication runs out of space, either at the source or at the target queues, the Q Capture program either stops or goes into retry mode, as determined by the qfull_retry_delay and qfull_num_retries parameters.
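As an illustrative sketch only (the retry values below are arbitrary and the defaults differ by release), Q Capture could be started with explicit queue-full retry settings:
asnqcap capture_server=QSAMPLE1 startmode=WARMSI qfull_num_retries=30 qfull_retry_delay=1000 &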
Disk space for the transmission queue only needs to be large enough for the largest
database transaction that can be replicated.
Running out of space is not catastrophic. Q Replication writes the information about its
progress in reading the DB2 logs and about the WebSphere MQ messages it has
processed to persistent storage, so any component of the replication process can be
interrupted – even abruptly – at any time, without loss of data.
In this paper, we use the same file system for all DB2 and WebSphere MQ data, but for
optimal performance, it is recommended to use separate disks and file systems for DB2
logs, DB2 data, WebSphere MQ logs, and WebSphere MQ data respectively. The space
required for WebSphere MQ logs at the source will be proportional to the amount of
messages that might accumulate in the XMITQ; at the target, it is proportional to the
number of messages that can be applied concurrently by Q Apply. In both cases, it is
much smaller than the space required for the WebSphere MQ data. In general, a couple
hundred MB are sufficient for WebSphere MQ log space, unless you are replicating
single DB2 transactions that are individually larger than this amount.
Preparing the databases for replication
Preparing a database to use replication requires:
1. Cataloging each DB2 database at the other site so that the Q Apply program can
connect remotely to it, if needed, such as during initial table loads. We catalog the
database through aliases as QSAMPLE1 at Site 1 and QSAMPLE2 at Site 2.
2. Setting LOGARCHMETH1=LOGRETAIN on the database. Circular logging cannot be
used because a log file that is still needed for replication could be reused.
3. Altering the tables that you want to replicate to enable DATA CAPTURE CHANGES.
See Appendix 1: “Preparing the database for replication” for the sample commands that
are used in this step.
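For orientation, a minimal sketch of these three steps follows; the node name, hostname, and port below are placeholders, and Appendix 1 contains the commands that were actually used:
# 1. Catalog the remote database (run at Site 1; an equivalent catalog is done at Site 2)
db2 catalog tcpip node site2nd remote coralx506 server 50000
db2 catalog database QSAMPLE as QSAMPLE2 at node site2nd
# 2. Enable archive logging so that log files needed by replication are retained
db2 update db cfg for QSAMPLE using LOGARCHMETH1 LOGRETAIN
# 3. Enable change capture on each table that will be replicated
db2 connect to QSAMPLE
db2 alter table ORDER data capture changes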
Q Replication installation and configuration
Installing and configuring Q Replication involves these steps:
1. Download and install WebSphere MQ
2. Create the WebSphere MQ objects.
3. Obtain a Q Replication license, if needed.
4. Initialize the shell environment
5. Create the control tables for Q Replication
1. Download and install WebSphere MQ
See Appendix 2 for commands and instructions to download and install WebSphere MQ.
2. Create the WebSphere MQ objects
We need to create the queue managers, WebSphere MQ queues, channels, and listeners.
For this task you need to know the IP hostname of each data member where WebSphere
MQ runs and the port number that is used by WebSphere MQ (port 1414 by default). You
also need the names of the shared file systems for WebSphere MQ, /db2fs/data at Site 1
and /db2fs/data2 at Site 2 in our case.
We will create the queue managers with the same names as the database aliases, namely,
QSAMPLE1 for Site 1, and QSAMPLE2 for Site 2.
We provide the steps for one pureScale instance; repeat at the other site.
Creating a queue manager
On Site 1 in San Francisco, as the DB2 instance owner, create a queue manager named
QSAMPLE1 by running the following command:
crtmqm -ld /db2fs/data/HA/logs -md /db2fs/data/HA/qmgrs -lp 50 -ls 10 -q QSAMPLE1
This creates the QSAMPLE1 queue manager, with its log files and the queue data on
shared disks specified by the -ld and -md options. The -lp and -ls options increase the
number of primary and secondary 4MB log files, which is recommended if very large
DB2 transactions are to be replicated. For more information on WebSphere MQ
configuration and performance considerations, see Required settings for WebSphere MQ
objects and Tuning WebSphere MQ for replication performance in the DB2 Information
Center.
Copying the queue manager information to other members for HA
You need to copy the queue manager information to all members to which it could fail over.
The dspmqinf (display WebSphere MQ configuration information) command generates
a WebSphere MQ addmqinf (add WebSphere MQ configuration information) command
to copy the queue manager information from one member to another:
dspmqinf -o command QSAMPLE1
Run the generated addmqinf command on each member to which WebSphere MQ could fail over. Generally, one or two alternate members are sufficient:
rlogin coralx502
addmqinf -s QueueManager -v Name=QSAMPLE1 -v Directory=QSAMPLE1 -v Prefix=/var/mqm -v DataPath=/db2fs/data/HA/qmgrs/QSAMPLE1
exit
The WebSphere MQ queue manager configuration is now completed for Site 1 in San
Francisco. We repeat for Site 2. As the DB2 instance owner, create a queue manager named QSAMPLE2, taking care to specify the correct shared disk names for this site (from any member):
rlogin coralx507
crtmqm -ld /db2fs/data2/HA/logs -md /db2fs/data2/HA/qmgrs -lp 50 -ls 10 -q QSAMPLE2
Then run dspmqinf to generate an addmqinf command, and run the generated command on each member where WebSphere MQ could fail over.
Create the WebSphere MQ channels, queues, and listeners
To complete this task, you need the TCP/IP address and a port for each site.
We use the Replication Center to generate the WebSphere MQ commands. Launch the
Replication Center from the Windows start menu shortcuts or from the command line:
db2rc
If this is the first time you have launched the Replication Center, you will be presented
with the Replication Center Launchpad.
Figure 5. MQ Script Generator button on Replication Center Launchpad
If you are starting from the Replication Center main window, click the Script Generator
icon on the toolbar, or click Tools > MQ Script Generator for Q Replication from the
Tools menu.
Figure 6. MQ Script Generator icon on Replication Center toolbar
Choose Bidirectional or peer-to-peer with smart defaults from the configuration dropdown box.
Figure 7. Select a configuration for the MQ Script Generator
Provide the port and IP address of any member where WebSphere MQ is configured to
run and the database alias name for each site. As illustrated in Figure 8, we run
WebSphere MQ on member 0 at Site 1 (coralx501) and on member 0 at Site 2 (coralx506).
The database aliases are QSAMPLE1 for Site 1 and QSAMPLE2 for Site 2. We use port
1414 (default port for WebSphere MQ) at each site.
Figure 8. Replication Center MQ Script Generator with the IP Addresses, port and DB2 aliases
After you fill in the fields, the Script Generator generates the commands to create all the
WebSphere MQ objects at both the source (Site 1 San Francisco) and target (Site 2
Toronto) locations. Scroll to pages 2 and 3 of the Script Generator to see the commands.
You can cut and paste the commands into a script file that you execute with the
WebSphere MQ runmqsc command.
Below are the commands that were generated for Site 1. Cut and paste the code into a
script file called server1.txt. We need to slightly modify this script for our environment
because the queue manager can run on another pureScale member for failover.
ALTER QMGR MAXMSGL(4914304)
DEFINE QLOCAL('ASN.QSAMPLE1.RESTARTQ') PUT(ENABLED) GET(ENABLED)
DEFINE QLOCAL('ASN.QSAMPLE1.ADMINQ') PUT(ENABLED) GET(ENABLED) SHARE
DEFINE QLOCAL('ASN.QSAMPLE2_TO_ASN.QSAMPLE1.DATA') GET(ENABLED) PUT(ENABLED) DEFSOPT(SHARED) MAXDEPTH(500000) MAXMSGL(4914304)
DEFINE QREMOTE('ASN.QSAMPLE2.ADMINQ') RNAME('ASN.QSAMPLE2.ADMINQ') RQMNAME('QSAMPLE2') XMITQ('QSAMPLE2') PUT(ENABLED)
DEFINE QLOCAL('QSAMPLE2') USAGE(XMITQ) MAXDEPTH(500000) MAXMSGL(4914304)
DEFINE QREMOTE('ASN.QSAMPLE1_TO_ASN.QSAMPLE2.DATA') RNAME('ASN.QSAMPLE1_TO_ASN.QSAMPLE2.DATA') RQMNAME('QSAMPLE2') XMITQ('QSAMPLE2') PUT(ENABLED)
DEFINE CHL('QSAMPLE1_TO_QSAMPLE2') CHLTYPE(SDR) TRPTYPE(TCP) CONNAME('coralx506.torolab.ibm.com(1414)') XMITQ('QSAMPLE2') DISCINT(0) CONVERT(NO) MAXMSGL(4914304)
DEFINE CHL('QSAMPLE2_TO_QSAMPLE1') CHLTYPE(RCVR) TRPTYPE(TCP) MAXMSGL(4914304)
DEFINE QMODEL('IBMQREP.SPILL.MODELQ') DEFSOPT(SHARED) MAXDEPTH(500000) MSGDLVSQ(FIFO) DEFTYPE(PERMDYN) MAXMSGL(4914304)
START CHL('QSAMPLE1_TO_QSAMPLE2')
DEFINE LISTENER('REPL_LSTR') TRPTYPE(TCP) PORT(1414) CONTROL(QMGR)
START LISTENER('REPL_LSTR')
Change the CONNAME parameter of the DEFINE CHL channel definition command for the sender channel 'QSAMPLE1_TO_QSAMPLE2', adding the TCP/IP addresses of the
alternate systems where the target queue manager might failover. This will allow
WebSphere MQ to transparently retry the connection when the primary target queue
manager becomes unavailable. The target hostnames will be retried in the order they are
specified. We add the hostname coralx507.torolab.ibm.com(1414) to the existing one,
separated by a comma:
DEFINE CHL('QSAMPLE1_TO_QSAMPLE2') CHLTYPE(SDR) TRPTYPE(TCP) CONNAME('coralx506.torolab.ibm.com(1414),coralx507.torolab.ibm.com(1414)') XMITQ('QSAMPLE2') DISCINT(0) CONVERT(NO) MAXMSGL(4914304)
Start the queue manager QSAMPLE1 and run the script server1.txt to create the objects:
strmqm -x QSAMPLE1
runmqsc QSAMPLE1 < filepath/server1.txt
The queue manager is started with the -x option, which will prevent the possibility of
two queue managers running as primary and simultaneously accessing the shared
WebSphere MQ data.
We now repeat the steps for Site 2, changing the DEFINE CHL connection attribute to
add the alternate hostname and port of Site 1, 'coralx502.torolab.ibm.com(1414)':
DEFINE CHL('QSAMPLE2_TO_QSAMPLE1') CHLTYPE(SDR) TRPTYPE(TCP) CONNAME('coralx501.torolab.ibm.com(1414),coralx502.torolab.ibm.com(1414)') XMITQ('QSAMPLE1') DISCINT(0) CONVERT(NO) MAXMSGL(4914304)
3. Obtain and install the license for Q Replication, if needed
Q Replication is part of DB2, and therefore does not need to be installed. If you have DB2
pureScale with the Advanced Enterprise Server Edition (AESE), the license is included.
Otherwise, contact your IBM sales representative to obtain a license. Use the db2licm
command to install the license.
4. Initialize the shell environment for replication tools
Define or update the following environment variables for replication tools. Add all the
environment variables in the .profile file of the instance owner so that they are set at
login time. Replace username with your actual user name.
export PATH=/home/username/sqllib/java/jdk64/jre/bin:/home/username/sqllib/bin:/opt/mqm/samp/bin:$PATH
export LD_LIBRARY_PATH=/home/username/sqllib/lib:/home/username/sqllib/java/jdk64/lib:/home/username/sqllib/java:/opt/mqm/lib64:$LD_LIBRARY_PATH
export DB2LIBPATH=/opt/mqm/lib64:$DB2LIBPATH
export CLASSPATH=/home/username/sqllib/tools:$CLASSPATH
Log out and log in again as the instance owner and check whether the variables are set correctly.
5. Create the Q Replication control tables
We create the control tables using an ASNCLP program script. To perform this task, you
need to know the names of the WebSphere MQ queues that are used for Q Replication,
because the create control table command stores those names in the tables.
Add the following commands to a text file called crtCtlTables.in (change the ID and
password as required for your systems):
ASNCLP SESSION SET TO Q REPLICATION;
SET RUN SCRIPT NOW STOP ON SQL ERROR ON;
# Create control tables for Capture and Apply at SITE 1 SAN FRANCISCO
SET SERVER CAPTURE TO DBALIAS QSAMPLE1 ID useridsite1 PASSWORD "mypassw";
SET SERVER TARGET TO DBALIAS QSAMPLE2 ID useridsite2 PASSWORD "mypassw";
SET QMANAGER "QSAMPLE1" FOR CAPTURE SCHEMA;
SET QMANAGER "QSAMPLE2" FOR APPLY SCHEMA;
CREATE CONTROL TABLES FOR CAPTURE SERVER USING RESTARTQ "ASN.QSAMPLE1.RESTARTQ" ADMINQ "ASN.QSAMPLE1.ADMINQ";
CREATE CONTROL TABLES FOR APPLY SERVER;
# Create control tables for Capture and Apply at SITE 2
SET QMANAGER "QSAMPLE2" FOR CAPTURE SCHEMA;
SET QMANAGER "QSAMPLE1" FOR APPLY SCHEMA;
SET SERVER CAPTURE TO DBALIAS QSAMPLE2 ID useridsite2 PASSWORD "mypassw";
SET SERVER TARGET TO DBALIAS QSAMPLE1 ID useridsite1 PASSWORD "mypassw";
CREATE CONTROL TABLES FOR CAPTURE SERVER USING RESTARTQ "ASN.QSAMPLE2.RESTARTQ" ADMINQ "ASN.QSAMPLE2.ADMINQ";
CREATE CONTROL TABLES FOR APPLY SERVER;
The crtCtlTables.in script can be executed from either site, and will create the tables for both sites:
asnclp -f crtCtlTables.in
Create a replication queue map for each direction
A replication queue map identifies the queues that are needed for replicating data in one
direction. It groups a send queue at the source with its associated receive queue at the
target, along with the administration queue at the target that is used by Q Apply for
sending control messages back to the Q Capture program at the source. Take note: the receiving administration queue on the Q Capture side is not part of the queue map, so be careful to specify the target adminq in the queue map definition, and not the source adminq. Specifying the wrong admin queue name is a common mistake.
Copy the following commands into a text file called crtQMap.in (change the ID and
password as required for your systems):
ASNCLP SESSION SET TO Q REPLICATION;
SET RUN SCRIPT NOW STOP ON SQL ERROR ON;
# Create Q Map FROM SITE 1 SAN FRANCISCO TO SITE 2
SET SERVER CAPTURE TO DBALIAS QSAMPLE1 ID useridsite1 PASSWORD "mypassw";
SET SERVER TARGET TO DBALIAS QSAMPLE2 ID useridsite2 PASSWORD "mypassw";
CREATE REPLQMAP QSAMPLE1_ASN_TO_QSAMPLE2_ASN USING ADMINQ "ASN.QSAMPLE1.ADMINQ" RECVQ "ASN.QSAMPLE1_TO_ASN.QSAMPLE2.DATA" SENDQ "ASN.QSAMPLE1_TO_ASN.QSAMPLE2.DATA";
# Create Q Map FROM SITE 2 TO SITE 1 SAN FRANCISCO
SET SERVER CAPTURE TO DBALIAS QSAMPLE2 ID useridsite2 PASSWORD "mypassw";
SET SERVER TARGET TO DBALIAS QSAMPLE1 ID useridsite1 PASSWORD "mypassw";
CREATE REPLQMAP QSAMPLE2_ASN_TO_QSAMPLE1_ASN USING ADMINQ "ASN.QSAMPLE2.ADMINQ" RECVQ "ASN.QSAMPLE2_TO_ASN.QSAMPLE1.DATA" SENDQ "ASN.QSAMPLE2_TO_ASN.QSAMPLE1.DATA";
Run the script that is saved in the file crtQMap.in with the following command from
either site; both sites will be updated:
asnclp -f crtQMap.in
To test connectivity and verify that your queues were set up properly, see “Appendix 3:
Testing WebSphere MQ connectivity.”
Replication subscriptions: Which tables do you want to
replicate?
The installation and configuration are now complete, and we are ready to define tables
and transactions to replicate. You can create replication subscriptions with either the
Replication Center or the replication command-line processor, ASNCLP. The Replication Center is a great learning tool, with wizards that guide you through the tasks. However,
once accustomed to the principles of replication, scripting with the ASNCLP language is
often preferred, because scripts can be easily customized and re-used for rapidly
deploying new configurations.
Q Replication provides great flexibility in selecting only a subset of the database objects
and transactions to replicate, but it is also very easy to subscribe to all tables for a given
database schema at once. The following ASNCLP commands create bidirectional Q
subscriptions for all tables in our database. In our test we have only two tables, but there could be thousands. Copy and paste these commands into a text file called crtSubs.in (change
the ID and password as required for your systems):
# Subscribe to tables under schema USERIDSITE1 for bi-directional replication between 2 sites
ASNCLP SESSION SET TO Q REPLICATION;
SET RUN SCRIPT NOW STOP ON SQL ERROR ON;
# Identify the databases in the configuration: QSAMPLE1 and QSAMPLE2
SET BIDI NODE 1 SERVER DBALIAS QSAMPLE1 ID useridsite1 PASSWORD "mypassw" SCHEMA ASN;
SET BIDI NODE 2 SERVER DBALIAS QSAMPLE2 ID useridsite2 PASSWORD "mypassw" SCHEMA ASN;
# Identify the tables to replicate
SET TABLES (NODE1 SRC OWNER LIKE "useridsite1%");
# Create bi-directional subscriptions for all the tables
CREATE QSUB SUBTYPE B
FROM NODE QSAMPLE1.ASN SOURCE ALL CHANGED ROWS Y HAS LOAD PHASE I START AUTOMATICALLY YES TARGET CONFLICT RULE C CONFLICT ACTION F
FROM NODE QSAMPLE2.ASN SOURCE ALL CHANGED ROWS N HAS LOAD PHASE I TARGET CONFLICT RULE C CONFLICT ACTION I;
The CREATE QSUB command will:
• Create the tables at the target if they don't already exist
• Define the subscriptions so they are activated automatically when the Q Capture program is started (START AUTOMATICALLY YES)
• Load the target tables automatically (HAS LOAD PHASE I) when their subscriptions are activated
• Check for conflicts by comparing changed values only (CONFLICT RULE C) and make Site 1 the winning site for conflicts by forcing conflicting changes coming from Site 1 (FROM NODE QSAMPLE1.ASN CONFLICT_ACTION F) and by ignoring conflicting changes coming from Site 2 (FROM NODE QSAMPLE2.ASN CONFLICT_ACTION I).
See the CREATE QSUB command (bidirectional replication) in the DB2 Information
Center for more information.
You can run ASNCLP commands from any site. The ASNCLP program connects to each
database to set up the configuration:
asnclp -f crtSubs.in
Tips for the initial load of extremely large databases
If you have a database with several thousand very large tables (e.g., terabytes) and very high transaction rates (e.g., hundreds of millions of database changes a day, or more), consider the following alternatives to the procedure illustrated above:
1. Starting Subscriptions: Minimize the amount of resources required for staging live changes while the load is taking place by defining the subscriptions with START AUTOMATICALLY NO. You then activate the subscriptions with a START command. It is advised to start the subscriptions for the largest tables first, and wait until they are loaded, before starting the next group of subscriptions. For example, start the terabyte-sized tables first; after they are loaded, the smaller tables can be started several hundreds or even thousands at a time. The resources needed are for spilling changes at the target until the load is completed. If the table is so big that it takes 8 hours to load across the wide-area network, then Q Replication needs enough spilling space to hold all the changes that took place during these 8 hours. For a very high-volume system, this requires very significant resources if you do it for all tables at once.
2. Loading Tables: Depending on your installation, it might be more efficient to load the tables outside of replication using a LOAD utility or even the entire database using backup and restore. In this case, you would define the replication subscription with an external load type, or without a load phase, respectively. See Loading target tables for Q Replication in the DB2 Information Center for explanations.
Replication operations: Controlling the replication process
We are now ready to replicate data. The queue manager, Q Capture, and the Q Apply
programs generally run continuously. They can be started in any order. Q Capture is
started with the asnqcap command, Q Apply with asnqapp, and the queue manager
with strmqm.
Member 0 on Site 1 (coralx501):
strmqm -x QSAMPLE1
asnqcap capture_server=QSAMPLE1 startmode=WARMSI &
asnqapp apply_server=QSAMPLE1 &
Member 0 on Site 2 (coralx506):
strmqm -x QSAMPLE2
asnqcap capture_server=QSAMPLE2 startmode=WARMSI &
asnqapp apply_server=QSAMPLE2 &
Q Capture command options include:
capture_server: The source database alias
startmode=warmsi: Q Capture restarts reading the DB2 logs from where it was last stopped.
Q Apply command options include:
apply_server: The target database alias
For more information on the Q Capture and Q Apply server options, see Operating a Q
Capture program and Operating a Q Apply program in the DB2 Information Center.
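Once the programs are running, a quick way to confirm their state is to use the replication command-line utilities; a brief sketch (the exact output varies by fix pack):
asnqccmd capture_server=QSAMPLE1 status
asnqacmd apply_server=QSAMPLE2 status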
Our subscriptions were created with START AUTOMATICALLY YES; therefore, starting
the Q Capture program automatically activates them. If you have created and populated
the tables as described in “Appendix 1: Sample commands for preparing the databases
for replication,” you can now verify that the data from the source database was loaded at
the target:
db2 connect to QSAMPLE2
db2 select * from ORDER;
The output of the SELECT statement at the target site is shown in Listing 1.
Listing 1: Output of SELECT * from ORDER in QSAMPLE2
ID          DESCRIPTION     QUANTITY
----------- --------------- -----------
          1 Snow Shovel               1
          2 Snow Shovel               2
          3 Snow Shovel               3
          4 Snow Shovel               4
          5 Snow Shovel               5
          6 Snow Shovel               6
          7 Snow Shovel               7
          8 Snow Shovel               8
          9 Snow Shovel               9
         10 Snow Shovel              10
         11 New snow shovel          11

  11 record(s) selected.
If you change any data in the source or target table, those changes are automatically
replicated.
# Testing replication from San Francisco to Toronto
db2 connect to QSAMPLE1 -- source system Site 1 San Francisco
db2 update ORDER set quantity=500 where id=5
db2 connect to QSAMPLE2 -- target system Site 2 Toronto
db2 select * from ORDER where id=5
Listing 2. Output from QSAMPLE2 when an update is done on QSAMPLE1
ID          DESCRIPTION     QUANTITY
----------- --------------- -----------
          5 Snow Shovel             500

  1 record(s) selected.
Now, you can test changes replicated in the other direction:
db2 connect to QSAMPLE2 -- Site 2 in Toronto
db2 insert into ORDER values (12, 'Big Shovel', 20)
db2 connect to QSAMPLE1 -- Site 1 San Francisco
db2 select * from ORDER where id=12;
Listing 3. Output from QSAMPLE1 when an update is done on QSAMPLE2
ID          DESCRIPTION     QUANTITY
----------- --------------- -----------
         12 Big Shovel               20

  1 record(s) selected.
Now that your replication configuration is running, you can experiment with some of the
continuous availability features that Q Replication technology makes possible. In
particular, Q Replication effectively leverages the possibility of staging changes into
WebSphere MQ queues to provide uninterrupted operations.
Taking down a site for maintenance
When bringing down a site for planned maintenance, you stop the replication process
after all changes have been captured and successfully applied to the other site, and before
rerouting the applications to the other site. You first stop the applications and then the Q
Capture program after all changes have been applied at the target, which can be verified
by checking that the QDEPTH at the target is 0 from the Q Replication dashboard.
Stopping and restarting a queue at the target during upgrades
When doing maintenance or migration on the target database at Site 2, you can let the
application run at Site 1 and allow the changes to accumulate in the receive queue of Site
2 until it is upgraded. Issue the following command to stop Q Apply from processing the
messages from the receive queue at Site 2:
asnqacmd apply_server=QSAMPLE2 stopq=ASN.QSAMPLE1_TO_ASN.QSAMPLE2.DATA
After you stop message processing on the queue, you can upgrade the application and
then restart message processing:
asnqacmd apply_server=QSAMPLE2 startq=ASN.QSAMPLE1_TO_ASN.QSAMPLE2.DATA
You could also stop the Q Apply program altogether by running the following
command, if all tables are replicated in a single queue:
asnqacmd apply_server=QSAMPLE2 STOP
Adding tables to an existing replication configuration
Tables can be added to an existing replication configuration at any time, by adding new subscriptions, without stopping the replication programs or suspending the applications that use those tables. To illustrate the process, we create a new table 'T2' at Site 1 and immediately start using it by inserting a row:
db2 connect to QSAMPLE1;
db2 create table T2 (tid integer not null primary key)
db2 insert into T2(tid) values(1);
We create a subscription for T2, which will create T2 at Site 2, load it, and start replicating changes to it. Copy the following commands into a script file called crtNewSub.in (update the ID and password for your databases and the schema name) and run the script using: asnclp -f crtNewSub.in.
# Script for adding tables to a bi-directional configuration
ASNCLP SESSION SET TO Q REPLICATION;
SET RUN SCRIPT NOW STOP ON SQL ERROR ON;
# Identify the databases in the configuration
SET BIDI NODE 1 SERVER DBALIAS QSAMPLE1 ID useridsite1 PASSWORD "mypassw" SCHEMA ASN;
SET BIDI NODE 2 SERVER DBALIAS QSAMPLE2 ID useridsite2 PASSWORD "mypassw" SCHEMA ASN;
# Identify the tables to replicate
SET TABLES (QSAMPLE1.ASN.USERIDSITE1.T2, QSAMPLE2.ASN.USERIDSITE2.T2);
# Create the subscriptions
CREATE QSUB SUBTYPE B
FROM NODE QSAMPLE1.ASN SOURCE ALL CHANGED ROWS Y HAS LOAD PHASE I START AUTOMATICALLY NO TARGET CONFLICT RULE C CONFLICT ACTION F
FROM NODE QSAMPLE2.ASN SOURCE ALL CHANGED ROWS N HAS LOAD PHASE I TARGET CONFLICT RULE C CONFLICT ACTION I;
# Start the subscriptions
START QSUB SUBNAME "T20001";
Note that the ASNCLP program generates a subscription name automatically by
appending a sequence number to the table name because it is possible to have several
subscriptions for the same table to different targets. In the example above, the
subscription was created as 'T20001'. You can view and also start the subscriptions from the Q Replication dashboard. See "Appendix 4: Installing and configuring the Q Replication Dashboard."
Adding a column to a table that is already being replicated
You tell Q Replication to include the new column in the subscription by inserting a signal row into the Q Capture IBMQREP_SIGNAL table. Q Replication will automatically alter the target table to add the column if it was not already added at the target by the user. For example:
db2 connect to QSAMPLE1
db2 -c alter table T2 add column val varchar(30)
db2 -c insert into ASN.IBMQREP_SIGNAL(signal_time, signal_type, signal_subtype, signal_input_in, signal_state) values (CURRENT_TIMESTAMP, 'CMD', 'ADDCOL', 'T20001;Val', 'P')
db2 commit
Stopping a subscription at the target during table maintenance
Imagine the following scenario: Queries against the ORDER table at Site 2 are running
slowly and you have determined that a REORG will solve the problem. This table exists
at both Site 1 and Site 2 and is kept in synch by Q Replication. You can continue to run
the application at Site 1 while doing the REORG at Site 2. You temporarily suspend
applying changes for this table at Site 2, allowing for this offline REORG operation while
the replication process continues for all other tables. The changes for this table are
safeguarded in a spill queue that is automatically created by the Q Apply program. The
queue is automatically removed once the backlog it contains is applied after you resume
the subscription.
Target Site (Site 2 in Toronto):
# Tell Q Apply to spill incoming changes for this table (stop
# applying them)
asnqacmd apply_server=QSAMPLE2
spillsub=ASN.QSAMPLE1_TO_ASN.QSAMPLE2.DATA:ORDER0001
# Do your table maintenance
db2 connect to QSAMPLE2
db2 REORG table ORDER
# Simulate activity at the source system by inserting a row
db2 connect to QSAMPLE1
db2 "insert into ORDER values (150, 'Big Shovel', 10)"
# The new row for id 150 was spilled at the target
# Once the REORG is finished, resume replication to this table:
asnqacmd apply_server=QSAMPLE2
resumesub=ASN.QSAMPLE1_TO_ASN.QSAMPLE2.DATA:ORDER0001
# Now verify that the spilled row has been applied
db2 connect to QSAMPLE2
db2 "select * from ORDER where id = 150"
WebSphere MQ unavailable
Your WebSphere MQ servers will most likely always be available to your Q Capture and
Q Apply programs. However, if a WebSphere MQ server is unavailable, the associated Q
Capture and Q Apply programs stop after saving their state for later restart. You can see
whether Q Capture or Q Apply went down by using the Q Replication Dashboard, or
you can set up notification with the asnmon alert monitor program. In our scenario we
rely on Tivoli System Automation to monitor the replication programs and automatically
restart them as explained in “Appendix 5: Automating Q Replication Failover using
TSA.”
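If you want a quick manual check in addition to these mechanisms, the status commands
of the replication programs report whether their threads are running. For example, using
the server names from this paper:
asnqccmd capture_server=QSAMPLE1 status
asnqacmd apply_server=QSAMPLE2 status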
In the event of a catastrophic failure where all WebSphere MQ log and data disks are
unrecoverable, it is not necessary to have disaster recovery in place for the WebSphere
MQ data. Q Replication can handle the situation after new queue managers are created
on different disks. The Q Capture program can simply be restarted to read back from the
log, recapture, and resend any lost messages. This is accomplished by invoking Q
Capture with the lsn and maxcmtseq parameters to override its saved restart
information. For more information, see Starting Q Capture from a known point in the
DB2 log in the DB2 Information Center.
To allow Q Replication to be switched to a different member, we created the queue
managers as multi-instance queue managers; however, the standby queue manager
instance is started only when needed. With multi-instance queue managers, the active
queue manager holds a lock on the queue manager data to ensure that only one instance
is active at a time.
Network unavailability and transmission queue full
Q Capture has the ability to transparently handle temporary network unavailability. If a
transmission queue fills up because network bandwidth is low or the network is
unavailable, Q Capture automatically retries putting messages, up to the number of
attempts set by its qfull_num_retries parameter, and then stops. Once it is restarted after
the network issue is resolved, Q Capture picks up right where it stopped.
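If your network is saturated on a regular basis, you can make Q Capture more tolerant of
a full transmission queue by raising its retry parameters when you start it. The values
below are purely illustrative; check the defaults and valid ranges for your release:
asnqcap capture_server=QSAMPLE1 startmode=WARMSI qfull_num_retries=50 qfull_retry_delay=1000 &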
DB2 pureScale member outage
Q Replication is unaffected by member failure or restart, except at the member where Q
Replication runs. Adding or removing other members is completely transparent to Q
Replication.
Outage of the member where Q Replication runs
If the member where the replication components, the queue manager, Q Capture, and Q
Apply programs run goes down, you are protected from data loss, because the
WebSphere MQ logs and queue data reside on the shared disk.
After a failure of the host member, a script can be used to restart the replication
components on any other member where WebSphere MQ was installed and configured.
The Q Capture and Q Apply programs must be restarted on the same member to which
the queue manager was failed over:
strmqm -x QSAMPLE1
asnqcap CAPTURE_SERVER=QSAMPLE1 startmode=WARMSI &
asnqapp apply_server=QSAMPLE1 &
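After the restart, you can verify which instance of the multi-instance queue manager is
active by using the WebSphere MQ dspmq command with the -x option, for example:
dspmq -x -m QSAMPLE1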
Automating Q Replication failover using Tivoli
System Automation
We leverage the Tivoli System Automation (TSA) function that is provided with DB2 to
automate the restarting of Q Replication in case of failures.
When using TSA, you manage replication operations using commands on a defined
resources group, rather than explicitly invoking the asnqcap and asnqapp programs. For
example, given a definition that groups Q Capture, Q Apply, and the queue manager as a
named resource group called 'ibmqrep-rg', you start replication at that site by taking this
resource group online with the TSA change resource group (chrg) command. This can be
done from any of the members of the site:
chrg -o online ibmqrep-rg
and you stop replication by taking the resource group offline:
chrg -o offline ibmqrep-rg
Once the resource group is offline, you can take the resource group out of TSA control by
locking it, which allows you to stop and start the programs outside of TSA. This is
sometimes useful for testing purposes, such as when experimenting with different Q
Capture run-time parameters.
rgreq -o lock ibmqrep-rg
To unlock a resource group and give control back to TSA, run the following command:
rgreq -o unlock ibmqrep-rg
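At any point, you can check the state of the individual resources in the group with the
same command that is used later in "Appendix 5: Automating Q Replication Failover
using TSA":
lsrg -m -g ibmqrep-rg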
Configuring TSA requires:
1) Defining resources and the dependencies between the resources. For
example, the Q Replication programs have a dependency on the queue
manager, and a dependency on DB2 being available.
2) Using scripts to monitor that the replication programs are operational, and to
start and stop them.
We provide the resource definitions and scripts in "Appendix 5: Automating Q
Replication Failover using TSA." The following diagram shows the dependency relationship
that we use for automating Q Replication failover in a pureScale environment:
Figure 9: Dependency relationship for automating Q Replication failover
(The figure shows a chain of DependsOn relationships linking Q Apply, Q Capture, and
the queue manager (Q Manager) to an equivalency of shadow resources, Shadow DB2
Instance 0 and Shadow DB2 Instance 1, that monitor the db2 processes.)
The primary dependency of Q Replication is on DB2. Q Replication can use DB2 from
any member, so we define shadow resources to monitor the DB2 member instances. If
the DB2 member goes down on the node where the queue manager and the Q Replication
programs are running, those servers are started on another node where a DB2 member
instance is still running. Likewise, if the node where the queue manager and the Q
Replication programs are running itself goes down, they are restarted on another node.
Monitoring, tuning, and troubleshooting replication
The Q Capture and Q Apply programs maintain a number of database tables to log
important information about the replication process. These include monitor tables for
performance metrics, trace tables for program information, and an exception table for
data conflicts. Over the years, many database administrators have developed tools that
leverage the accessibility of this information; you can always trust a good DBA to find
many uses for information that is so readily accessible and useful in DB2 tables!
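For example, a simple ad hoc query against the Q Apply monitor table at Site 2 shows
recent end-to-end latency and throughput. This assumes the default ASN schema; column
names can vary slightly between versions, so verify them against your control table level:
db2 connect to QSAMPLE2
db2 "select monitor_time, end2end_latency, rows_applied from ASN.IBMQREP_APPLYMON
order by monitor_time desc fetch first 10 rows only"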
IBM tools also leverage those monitor tables, and all other tables that the Q Capture and
Q Apply programs update. In addition, IBM provides an extensive toolset to help
manage a multi-site replication configuration.
Command-line utilities
Q Replication provides tools to monitor the replication environment while it is running
and to send alerts when certain user-defined conditions occur:
• The asnmon program can define alerts and send email notifications when
triggered, for example when replication latency exceeds a set threshold or when
the Q Capture program unexpectedly fails. See Monitoring replication with the
Replication Alert Monitor in the DB2 Information Center for more details.
• The asntdiff program compares two tables, one from each site, and reports their
differences as a series of update, delete, and insert statements that would be
required to re-synchronize them, even in the presence of replication subsetting
(only a subset of the columns is replicated) or transformations (data is
transformed when replicated). The asntrep tool then uses the output generated by
asntdiff to re-synchronize the tables. See Comparing and repairing tables in the
DB2 Information Center for more information.
• The asnqmfmt program enables you to read the messages in the Q Replication
queues. See asnqmfmt: Formatting and viewing Q Replication and event
publishing messages in the DB2 Information Center for more information.
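For example, to compare the copies of the ORDER table that are kept in sync by the
subscription ORDER0001 used earlier in this paper, an asntdiff invocation along the
following lines could be used (the exact syntax can vary by version, so check the
Information Center topic referenced above):
asntdiff db=QSAMPLE1 schema=ASN where="SUBNAME = 'ORDER0001'"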
Graphical utilities
The Q Replication Dashboard is a web-browser-based application. The dashboard taps
into the wealth of historical and near-real-time information provided by the Q Capture
and Q Apply programs to deliver end-to-end monitoring of the replication
configuration, and it offers tuning aids in the form of advisors. Follow the instructions
in "Appendix 4: Installing and configuring the Q Replication Dashboard." You can then
visualize your replication process end to end, define alerts, generate reports, and launch
the advisors for tuning your system over time.
Figure 10. Dashboard health summary page
Click the Live Graph tab to see the bidirectional replication configuration (QSAMPLE1 to
QSAMPLE2) and (QSAMPLE2 to QSAMPLE1) side by side as shown below. The live
graph shows the Log latency, End-to-End latency, and the Q Capture and Q Apply
throughput (rows/sec) for both replication directions.
Figure 11. Live Graph page
If the member that the Q Replication Dashboard is connected to goes down, the
dashboard displays "No database connectivity." When the member is brought back up,
the dashboard reconnects automatically and resumes monitoring.
Conclusion
The DB2 pureScale Feature for Enterprise Server Edition is designed to leverage the
PowerHA pureScale server and RDMA technologies, allowing it to scale to meet the
growing and dynamic needs of different organizations. Additional members can be
added to the DB2 pureScale environment, without any impact on existing applications, to
meet the demands of peak processing times. The DB2 pureScale Feature automatically
balances workloads across all cluster members to take full advantage of the additional
processing capacity, without needing any changes in the application. If a DB2 member
fails, applications are automatically re-routed among the other active members until the
failed member comes back online. The design and capabilities of the DB2 pureScale
Feature reduce the total cost of ownership compared with other solutions via a simplified
deployment and maintenance model.
By adding Q Replication to the solution, zero downtime can be achieved through major
migrations, upgrades, and maintenance, and application response times can be improved
by balancing the workload of contentious applications between two (or more) active
databases.
Q Replication can be used both for meeting Continuous Availability requirements that
include Disaster Recovery and High-Availability, and for feeding various data stores and
warehousing systems to allow reporting and analytics over near real-time data.
Contributors
Serge Boivin
DB2 Information Development
Matthew Huras
Lead Architect, DB2 LUW
Dell Burner
InfoSphere Replication Server Information
Development
Appendix 1: Sample commands for preparing the
databases for replication
Follow these steps to create the source and target databases and configure them for Q
Replication:
1. Create and catalog the DB2 databases
In this paper, we create a test database named QSAMPLE at both the source and target.
It is common for the database to have the same name at both the source and target,
because it allows an application to run unchanged at either site. To create the new
databases:
# log in to Site 1
db2 create database QSAMPLE on /db2fs/data
# log in to Site 2
db2 create database QSAMPLE on /db2fs/data
Q Replication requires that each database in the replication configuration be known by a
unique name (hence the aliases). This is necessary because the Q Apply program must be able to
connect to both the source and target databases at the same time for certain operations,
for example when loading a table for the first time at the target. The replication
administrative interfaces (ASNCLP, Q Replication Dashboard, Replication Center) also
must connect to both sites for configuration and operations.
Because the source and target database names are both QSAMPLE, we will create
database aliases QSAMPLE1 (for QSAMPLE on system 1, the source database) and
QSAMPLE2 (for QSAMPLE on system 2, the target database). We pick member 0 on each
site for setting up the aliases, although any member could be used. Member 0 is coralx501
at Site 1 and coralx506 at Site 2. Run the following set of commands to catalog (create aliases) at
Site 1 in San Francisco:
db2 uncatalog database QSAMPLE1
db2 uncatalog database QSAMPLE2
db2 uncatalog node Node2
db2 catalog tcpip node Node2 remote coralx506.torolab.ibm.com
server 33079
db2 catalog database QSAMPLE as QSAMPLE2 at node Node2
authentication server
db2 catalog db QSAMPLE as QSAMPLE1 on /db2fs/data
Run the following commands to catalog the databases at Site 2 in Toronto:
db2 uncatalog database QSAMPLE1
db2 uncatalog database QSAMPLE2
db2 uncatalog node Node1
db2 catalog tcpip node Node1 remote coralx501.torolab.ibm.com
server 33065
db2 catalog database QSAMPLE as QSAMPLE1 at node Node1
authentication server
db2 catalog database QSAMPLE as QSAMPLE2 on /db2fs/data2
You can now test your connections. From each site in turn, issue:
db2 connect to QSAMPLE1
db2 connect to QSAMPLE2
Please note that if at either site you get a TCP/IP communication error like this:
SQL30081N A communication error has been detected. Communication protocol
being used: "TCP/IP". Communication API being used: "SOCKETS". Location
where the error was detected: "x.xx.xx.xxx". Communication function detecting
the error: "connect". Protocol specific error code(s): "111", "*", "*".
SQLSTATE=08001
Then you might need to set or add the communication protocol (TCP/IP) on the remote
instance that you are trying to connect to, by running the following command:
db2set DB2COMM=TCPIP
You also need SVCENAME to be set in the database manager configuration at each site:
db2 update dbm cfg using SVCENAME 33079
The DB2 TCP/IP port number can be obtained from the /etc/services file. For example,
our /etc/services file contains the following entry:
xjcolaco        33079/tcp
2. Creating a new target database from an existing source
database
You can use an existing database as the source, and then create an empty copy of this
database using the db2look command, which generates a file containing the data
definition language (DDL):
db2look -d QSAMPLE -e -l -x > ddl.out
You then create the source tables on the target side by running the generated DDL file:
db2 -tvf ddl.out
3. Enable QSAMPLE for replication - LOGARCHMETH1
parameter
Before starting the replication servers, make sure that the LOGARCHMETH1 database
configuration parameter is set to LOGRETAIN on both the QSAMPLE1 and QSAMPLE2
databases. This parameter determines whether active log files are retained and available
for roll-forward recovery. The logs must be retained because Q Replication reads the logs
asynchronously; it might also be stopped temporarily. The Q Capture program needs to
be able to retrieve log records for transactions that have already been committed. For
this reason, circular logging cannot be used with replication. Setting LOGARCHMETH1
puts the database in backup pending mode, so a backup is required before the database can be used.
To set LOGARCHMETH1 to LOGRETAIN and backup the database, run the following
commands:
db2 connect to QSAMPLE1
db2 update db cfg for QSAMPLE1 using LOGARCHMETH1 LOGRETAIN
db2 deactivate db QSAMPLE1
# Discard the backup; it is only needed to get the DB out of backup pending state
db2 backup db QSAMPLE1 to /dev/null
db2 activate database QSAMPLE1
db2 connect to QSAMPLE2
db2 update db cfg for QSAMPLE2 using LOGARCHMETH1 LOGRETAIN
db2 deactivate database QSAMPLE2
# Discard the backup; it is only needed to get the DB out of backup pending state
db2 backup db QSAMPLE2 to /dev/null
db2 activate database QSAMPLE2
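To verify the setting before you continue, you can display the database configuration at
each site for its local database; the LOGARCHMETH1 line should show LOGRETAIN.
For example, at Site 1:
db2 get db cfg for QSAMPLE1 | grep -i LOGARCHMETH1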
4. Creating tables to replicate
The source database is called QSAMPLE1 via an alias. We create and populate two tables
in this database. We will let replication automatically create and load these tables on Site
2.
# Connect to Site 1 in San Francisco
db2 connect to QSAMPLE1
db2 "create table ORDER(id int not null primary key, description
varchar(20), quantity int)"
db2 "create table INVENTORY(id int not null primary key, quantity
int, location varchar(128))"
We insert some data into the tables that will later be replicated. The following nifty SQL
inserts 10 rows with different values for the key. You can modify it to insert thousands or
more if you wish.
db2 "insert into ORDER (id,description, quantity) with tids
(newid) as (values(1) union all select t.newid +1 from tids t
where t.newid <10
) select f.newid,'Snow Shovel',f.newid from tids f"
db2 "insert into INVENTORY (id, quantity, location) with tids
(newid) as (values(1) union all select t.newid +1 from tids t
where t.newid <10
) select f.newid, f.newid, 'Warehouse' from tids f"
# Insert one more row, for good measure
db2 "insert into ORDER (id, description, quantity) values (11,
'New snow shovel', 11)"
db2 "insert into INVENTORY (id, quantity, location) values (11,
11, ‘Store’)"
5. Setting Data Capture Changes (DCC) on the tables to replicate
Setting DATA CAPTURE CHANGES for a table instructs DB2 to perform additional
logging so that the Q Capture program can read the DB2 log records for that table. Log
records for a table without this setting are not visible to the Q Capture program.
db2 connect to QSAMPLE1;
db2 alter table ORDER data capture changes
db2 alter table INVENTORY data capture changes
If you were setting up bidirectional replication for tables that already exist, you would
need to enable DATA CAPTURE CHANGES on both sites. But, in our example, the
target tables will be created by the replication administration tools, which will create
them automatically with DATA CAPTURE CHANGES.
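You can confirm the attribute from the DB2 catalog: a value of 'Y' (or 'L') in the
DATACAPTURE column indicates that the additional logging is enabled. Adjust the
query if your tables were created under a different schema:
db2 connect to QSAMPLE1
db2 "select tabname, datacapture from syscat.tables where tabname in
('ORDER', 'INVENTORY')"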
Appendix 2: Downloading and installing WebSphere
MQ
Please note that these commands are for a Linux system. For installing on other operating
systems, please refer to the “Quick Beginnings” section for that operating system:
http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/index.jsp
1. Download WebSphere MQ 7.0.1-3 from
http://www.ibm.com/developerworks/downloads/ws/wmq/ . You might need to register
at the site to obtain a user ID.
2. Unzip the package by running the following command:
gzip -d CZJ3ZML.tar.gz
3. Untar the tar file by running the following command:
tar xvf CZJ3ZML.tar
4. Accept the license:
mqlicense.sh -accept
5. Run the rpm installer to install WebSphere MQ 7.0.1.3 on the first member (as root):
rpm -ivh MQSeriesRuntime-7.0.1-3.x86_64.rpm MQSeriesServer-7.0.1-3.x86_64.rpm
MQSeriesSamples-7.0.1-3.x86_64.rpm
Repeat these steps to install WebSphere MQ on all members of the pureScale system on
Site 1 (coralx501 and coralx502 in San Francisco) and Site 2 (coralx506 and coralx507 in
Toronto), where a queue manager might need to run.
After you have installed WebSphere MQ, add the DB2 instance owner to the queue
manager access group. By default, the WebSphere MQ installer automatically creates a
user named mqm and a group called mqm:
usermod -G mqm db2userid
You need to add the DB2 instance owner to the mqm group so that the DB2 instance
owner can create a queue manager and run administration commands for the queue
manager on the source system.
You must run this command as root. If you do not have superuser authority, ask your
administrator to add the instance owner to the mqm group. Repeat this command on the
additional pureScale members where the queue manager runs. Make sure that the UID
(user ID) and GID (group ID) values for the mqm user and mqm group are the same on
all the pureScale members.
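A simple way to confirm that the IDs match is to run the id command on every member
and compare the numeric uid and gid values (db2userid stands for your DB2 instance
owner):
id mqm
id db2userid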
Appendix 3: Testing WebSphere MQ connectivity
We will use the sample WebSphere MQ programs amqsput and amqsget to put test
messages in a remote queue and get the test messages from the receive queue at the other
end. This test will validate the WebSphere MQ setup and connectivity of the local and
remote queue pairs between the queue managers.
To test the connectivity from San Francisco to Toronto, you put data on the send queue
by running the amqsput command on coralx501 (member 0 Site 1 in San Francisco),
which is where the queue manager runs:
rlogin coralx501
amqsput ASN.QSAMPLE1_TO_ASN.QSAMPLE2.DATA QSAMPLE1
ASN.QSAMPLE1_TO_ASN.QSAMPLE2.DATA is the name of the send queue in San
Francisco and QSAMPLE1 is the name of the queue manager in San Francisco. The
following messages are displayed:
QSAMPLE1 amqsput0 start
target queue is ASN.QSAMPLE1_TO_ASN.QSAMPLE2.DATA
Type some message text on one or more lines, and then press Enter twice. The following
message is displayed:
QSAMPLE1 amqsput0 end
Now you can test whether the test message that you typed was successfully delivered to
coralx506 in Toronto (the member where the queue manager runs in Toronto) by running
a get command at that end:
rlogin coralx506
amqsget ASN.QSAMPLE1_TO_ASN.QSAMPLE2.DATA QSAMPLE2
ASN.QSAMPLE1_TO_ASN.QSAMPLE2.DATA is the name of the receive queue in Toronto,
and QSAMPLE2 is the name of the queue manager in Toronto. This command displays the
test message that you typed in San Francisco.
Repeat the test with the administration queues, and then for the other direction by
substituting the queue manager and queue names.
Appendix 4: Installing and configuring the Q
Replication Dashboard
You can download the Q Replication Dashboard from the following site:
https://www-304.ibm.com/support/docview.wss?uid=swg24023065. This download requires an
IBM ID. Reuse the ID you created for downloading WebSphere MQ or just register at the site.
Follow the installation instructions in the download package.
Here is a set of steps that you can follow to set up the dashboard.
1. Start the dashboard from the Windows Start menu: All Programs > IBM Q
Replication Dashboard > Launch Q Replication Dashboard.
2. Log in by using the default user ID of dashboarduser and default password of
dashboarduser.
3. Before you can monitor a replication configuration, you need to create a
monitoring group. On the dashboard Welcome page, click Create a monitoring
group.
Figure 12. Q Replication Dashboard welcome page
4. Click Add Server.
5. Enter information for connecting to the source database (member 0 on Site 1 San
Francisco). Currently the dashboard supports addition of only one member of a
multi-member pureScale DB2. The dashboard can retrieve the database alias
from the control tables when you click Auto Populate.
Figure 13. Q Replication Dashboard Add Server Details for QSAMPLE1
6. Similarly add the target server information (pureScale member 0 on Site 2
Toronto).
Figure 14. Q Replication Dashboard Add Server details for QSAMPLE2
7. Pick one server (either source QSAMPLE1 or target QSAMPLE2) and click Next.
The dashboard can get all the configuration information from either of the
servers.
Figure 15. Selecting a server for the monitoring group
8. Choose the schema under which the replication control tables are created. The
dashboard detects all of the schemas in that database. In our example, we have
only one schema (ASN). Click Next.
9. Select the replication configuration in the configuration page. In our example, the
bidirectional replication configuration (B icon) between QSAMPLE1 and
QSAMPLE2 is pre-selected because it is the only configuration set up between
the servers. Note that the same two servers could also have a unidirectional or
peer-to-peer configuration.
Figure 16. Selecting a configuration
10. Finally, you can rename the monitoring group and change how often the
dashboard retrieves data from the servers. In this example, we leave the default
values and click Finish to complete the creation of a Monitoring Group.
11. You should see green, diamond-shaped icons against the configurations, send
queues, and Q subscriptions, indicating that there are no warnings in the setup. A
red square would indicate that at least one error occurred, and you could click the
icon to drill down for diagnostic information.
Appendix 5: Automating Q Replication Failover using
TSA
Follow the procedure in this section for each site.
Downloading and testing the scripts
• Initialize the shell environment as described in Section 4, "Initialize the shell
environment for replication tools."
• Create a directory on the shared disk. This directory will be the location for the
wrapper scripts that start, stop, and monitor the queue manager, Q Capture, and
Q Apply programs. These scripts must be accessible from all the members in the
cluster. As root, we create a directory called qrepTSA on the shared disk under
/db2fs/data/HA on Site 1, and change the ownership and file modes of the
directory:
mkdir /db2fs/data/HA/qrepTSA
chown -R jcolaco:pdxdb2 /db2fs/data/HA/qrepTSA
chmod -R ug+rwx /db2fs/data/HA/qrepTSA
• Download to this directory all of the wrapper scripts from IBM developerWorks
that start, stop, and monitor the status of the servers (Q Capture, Q Apply, and
the queue manager).
• Test the scripts from both members. For example, from coralx501 run (as root):
IBMQRep_QM.ksh username start QSAMPLE1 verbose
This should start the queue manager QSAMPLE1. Similarly, test the stop and
monitor options of the IBMQRep_QM.ksh script (the monitor exit code is 1 if the
server is up and 2 if it is down). You should also be able to run the scripts when
logged in on the coralx502 member.
• Test the scripts to start, stop, and monitor Q Capture (IBMQRep_QCapture.ksh):
IBMQRep_QCapture.ksh jcolaco jcolaco start QSAMPLE1 ASN
verbose
Read the prolog in each script for the description of its invocation arguments.
• Test the Q Apply server by running the script IBMQRep_QApply.ksh as root:
IBMQRep_QApply.ksh jcolaco jcolaco start QSAMPLE1 ASN
verbose
• Test the script IBMQRep_DB2MemberInst.ksh, which monitors the status of the
DB2 member instance. Please note that this script will not start or stop the
member instance; its return code is 1 if the member instance is running and 2 if it
is stopped:
IBMQRep_DB2MemberInst.ksh status db2_jcolaco_0-rs verbose
db2_jcolaco_0-rs is the TSA resource name for our DB2 member instance (the
naming convention for the resource is db2_instancename_memberid-rs).
Setting up failover automation
From the command-line:
• Run the Tivoli command lsrpnode to get the node information on both clusters.
In this paper, we use the node names coralx501 and coralx502 on
Site 1/Cluster 1, and coralx506 and coralx507 on Site 2/Cluster 2.
• Run the IBMQRep_TSARes.ksh script to create all the TSA resources for Q
Replication failover automation. Please note that before you run it you must
update the start, stop, and monitor commands in the script with the script location
directory, the command arguments, and the cluster node names.
Please note that the verbose option is turned on in the IBMQRep_TSARes.ksh script so
that all of the TSA resource logging, for all of the resources, goes to the system logs. You
can look at the system logs for more information to troubleshoot the setup. The Q
Capture and Q Apply server logs are created in the replication user's home directory.
Once you have verified the TSA setup, you can remove the verbose option from the start,
stop, and monitor commands in IBMQRep_TSARes.ksh.
You can remove the resources by running the script IBMQRep_RemoveTSARes.ksh.
Testing Automation
Once the resources are created, you can bring the entire ibmqrep-rg resource group
online by running the following command:
chrg -o online ibmqrep-rg
To check the status of the resources, you can run the following command:
lsrg -m -g ibmqrep-rg
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other
countries. Consult your local IBM representative for information on the products and services
currently available in your area. Any reference to an IBM product, program, or service is not
intended to state or imply that only that IBM product, program, or service may be used. Any
functionally equivalent product, program, or service that does not infringe any IBM
intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in
this document. The furnishing of this document does not grant you any license to these
patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where
such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES
CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do
not allow disclaimer of express or implied warranties in certain transactions, therefore, this
statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are
periodically made to the information herein; these changes will be incorporated in new
editions of the publication. IBM may make improvements and/or changes in the product(s)
and/or the program(s) described in this publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only
and do not in any manner serve as an endorsement of those Web sites. The materials at
those Web sites are not part of the materials for this IBM product and use of those Web sites is
at your own risk.
IBM may use or distribute any of the information you supply in any way it believes
appropriate without incurring any obligation to you.
Any performance data contained herein was determined in a controlled environment.
Therefore, the results obtained in other operating environments may vary significantly. Some
measurements may have been made on development-level systems and there is no
guarantee that these measurements will be the same on generally available systems.
Furthermore, some measurements may have been estimated through extrapolation. Actual
results may vary. Users of this document should verify the applicable data for their specific
environment.
Information concerning non-IBM products was obtained from the suppliers of those products,
their published announcements or other publicly available sources. IBM has not tested those
products and cannot confirm the accuracy of performance, compatibility or any other
claims related to non-IBM products. Questions on the capabilities of non-IBM products should
be addressed to the suppliers of those products.
All statements regarding IBM's future direction or intent are subject to change or withdrawal
without notice, and represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To
illustrate them as completely as possible, the examples include the names of individuals,
companies, brands, and products. All of these names are fictitious and any similarity to the
names and addresses used by an actual business enterprise is entirely coincidental.
COPYRIGHT LICENSE: Copyright IBM Corporation 2011. All Rights Reserved.
This information contains sample application programs in source language, which illustrate
programming techniques on various operating platforms. You may copy, modify, and
distribute these sample programs in any form without payment to IBM, for the purposes of
developing, using, marketing or distributing application programs conforming to the
application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions.
IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these
programs. The sample programs are provided "AS IS", without warranty of any kind. IBM shall
not be liable for any damages arising out of your use of the sample programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International
Business Machines Corporation in the United States, other countries, or both. If these and
other IBM trademarked terms are marked on their first occurrence in this information with a
trademark symbol (® or ™), these symbols indicate U.S. registered or common law
trademarks owned by IBM at the time this information was published. Such trademarks may
also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at “Copyright and trademark information” at
www.ibm.com/legal/copytrade.shtml
Windows is a trademark of Microsoft Corporation in the United States, other countries, or
both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.