EMC VPLEX WITH XTREMIO 2.4 PERFORMANCE CHARACTERISTICS, SIZING, TESTING, AND USE CASES
White Paper
Abstract
This white paper describes performance characteristics,
sizing, benchmark testing, and use cases for EMC VPLEX
solutions with XtremIO All-Flash Storage.
Copyright © 2014 EMC Corporation. All Rights Reserved.
EMC believes the information in this publication is accurate as of its publication date. The
information is subject to change without notice.
The information in this publication is provided “as is”. EMC Corporation makes no
representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or fitness for a
particular purpose.
Use, copying, and distribution of any EMC software described in this publication requires
an applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks
on EMC.com.
All other trademarks used herein are the property of their respective owners.
Part Number H13540
Table of Contents
Executive summary
Audience
Introduction
  Flash is Different from Disk
Section 1: VPLEX Architecture and Configuration Limits
  VPLEX hardware platform
  VPLEX Witness
  VPLEX Management Console
  VPLEX VS2 Engine
  VPLEX System Configuration Limits
Section 2: XtremIO Architecture and Configuration Limits
  XtremIO System Description
  Scale-Out Architecture
Section 3: VPLEX with XtremIO Performance Characteristics
  Performance Overview
  Native XtremIO vs. VPLEX Local + XtremIO Testing and Results
  Overall VPLEX-XtremIO Testing Observations
  Native XtremIO vs. VPLEX Metro + XtremIO Testing and Results
Section 4: Sizing VPLEX-XtremIO Solutions
  The BCSD Tool
  EMC Global Business Services -- VPLEX “Sizing as a Service”
Section 5: VPLEX Performance Checklist
Section 6: VPLEX + XtremIO Use Cases
  Introduction
  Active-active continuous availability and mobility plus advanced disaster recovery
  Changing class of service non-disruptively
  Zero downtime, zero data loss
  Continuous data protection and remote replication
Section 7: Benchmarking
  VPLEX Performance Benchmarking Guidelines
Conclusion
References
Executive summary
In today’s increasingly demanding business environments, enterprises are being driven to
deliver responsive, continuously available applications that provide customers with an
uninterrupted user experience. There are also higher demands on IT infrastructure
performance and data availability. This environment is typically driven by:
• High-transaction workloads
• Time-critical applications and escalating service-level agreements
• Turnkey third-party applications with high sensitivity for I/O responsiveness
• Replication of application databases for use by supporting business processes such as business intelligence (BI) reporting, testing, and development
• Need for highly available architectures
In the past, businesses relied on traditional spinning disk physical storage to address all of their needs. Recent developments such as server virtualization, All-Flash Arrays (AFAs), and the growth of multiple sites throughout a business's network have placed new demands on how storage is managed and have led to greater storage performance expectations.
To keep pace with these new requirements, storage solutions must evolve to deliver new
levels of performance and new methods of freeing data from a physical device. Storage
must be able to connect to virtual environments and still provide automation, integration with
existing infrastructure, high performance, cost efficiency, availability, and security.
EMC VPLEX combined with XtremIO addresses these new challenges and completely changes the way IT is managed and delivered, particularly when deployed with server virtualization. By enabling new models for operating and managing IT, storage resources can be federated (pooled and made to cooperate through the stack) with the ability to dynamically move applications and data within and across geographies and service providers.
Audience
This white paper is intended for storage, network, and system administrators who desire a deeper understanding of the performance aspects of EMC VPLEX with XtremIO, the sizing best practices, and/or the planning considerations for the future growth of their VPLEX with XtremIO virtual storage environment(s). This document outlines how VPLEX technology interacts with XtremIO environments, how existing XtremIO environments might be impacted by VPLEX technology, and how to apply best practices through basic guidelines and troubleshooting techniques as uncovered by EMC VPLEX and XtremIO performance engineering and EMC field experience.
Introduction
We ask readers to use the information presented to understand the performance
characteristics of VPLEX with XtremIO All-Flash arrays. The goal is to provide concrete
performance data so informed judgments about overall solution capabilities can be made. If
there are questions about any of the content in this document, please contact EMC Sales or Technical Support representatives.
Flash is Different from Disk
Traditional disk subsystems are optimized to avoid random access. Array controllers have evolved at the rate of Moore's Law: over the last 20 years, CPU power and memory density have improved by a factor of 4,000, while disk I/O operations per second (IOPS) have only tripled. To make up for this gap, storage engineers use complex algorithms to trade CPU cycles and memory capacity for fewer accesses to the underlying disk subsystem of their arrays.
Testing traditional storage focuses on characterizing array controllers and caching. Those are the differentiators for spinning disk arrays, which all tend to be limited by the same set of commodity disk technologies when it comes to actually accessing the disk.
Flash is an inherently random access medium, and each SSD can deliver more than a
hundred times the IOPS of enterprise class hard disk drives. Enterprise SSDs deliver
consistent low latency regardless of I/O type, access pattern, or block range. This enables
new and improved data services that are only now starting to mature in the marketplace.
However, SSD reads are faster than writes. Flash also wears out, and endurance will be
compromised if the same flash locations are repeatedly written. This calls for a fundamental
rethinking of how controllers for AFAs should be designed, and consequently how those
designs should be tested to highlight strengths and weaknesses.
This whitepaper explains important considerations when evaluating, sizing, and deploying
VPLEX with XtremIO AFAs. XtremIO uses new technology with fewer mature practices to aid
evaluation, and flash can behave in unfamiliar ways. The reader will be in a stronger position
to make the right design choices after completing this paper.
Section 1: VPLEX Architecture and Configuration Limits
VPLEX hardware platform
A VPLEX system is composed of one or two VPLEX clusters: one cluster for VPLEX Local
systems and two clusters for VPLEX Metro and VPLEX Geo systems. These clusters
provide the VPLEX AccessAnywhere capabilities.
Each VPLEX cluster consists of:
• A VPLEX Management Console
• One, two, or four engines
• One standby power supply for each engine
In configurations with more than one engine, the cluster also contains:
• A pair of Fibre Channel switches
• An uninterruptible power supply for each Fibre Channel switch
As engines are added, cache, CPU, front-end, back-end, and WAN-COM connectivity capacity are increased, as indicated in Table 2 below.
VPLEX Witness
VPLEX Metro and VPLEX Geo systems optionally include a Witness. As illustrated in Figure 1, VPLEX Witness is implemented as a virtual machine and is deployed in a separate (third) failure domain from the two VPLEX clusters. The Witness is used to improve application availability in the presence of site failures and inter-cluster communication loss.
Figure 1: VPLEX with 3 Independent Failure Domains (Cluster A in Failure Domain #1 and Cluster B in Failure Domain #2, connected by inter-cluster networks A and B and an IP management network, with the VPLEX Witness in Failure Domain #3)
VPLEX Management Console
The VPLEX Management Console is a 1U server in the VPLEX cabinet. This server provides
the management interfaces to VPLEX—hosting the VPLEX web server process that serves
the VPLEX GUI and REST-based web services interface, as well as the command line
interface (CLI) service. This server’s power is backed up by a UPS in dual and quad engine
configurations.
In the VPLEX Metro and VPLEX Geo configurations, the VPLEX Management Consoles of
each cluster are inter-connected using a virtual private network (VPN) that allows for remote
cluster management from a local VPLEX Management Console. When the system is
deployed with a VPLEX Witness, the VPN is extended to include the Witness as well.
VPLEX VS2 Engine
A VPLEX VS2 Engine is a chassis containing two directors, redundant power supplies, fans,
I/O modules, and management modules. The directors are the workhorse components of the
system and are responsible for processing I/O requests from the hosts, serving and
maintaining data in the distributed cache, providing the virtual-to-physical I/O translations,
and interacting with the storage arrays to service I/O.
A VPLEX VS2 Engine has 10 I/O modules, with five allocated to each director. Each director has one four-port 8 Gb/s Fibre Channel I/O module used for front-end SAN (host) connectivity and one four-port 8 Gb/s Fibre Channel I/O module used for back-end SAN (storage array) connectivity. Each of these modules has 40 Gb/s of effective PCI bandwidth to the CPUs of its corresponding director. A third I/O module, called the WAN COM module, is used for inter-cluster communication; two variants of this module are offered, one four-port 8 Gb/s Fibre Channel module and one two-port 10 Gb/s Ethernet module. The fourth I/O module provides two ports of 8 Gb/s Fibre Channel connectivity for intra-cluster communication. The fifth I/O module for each director is reserved for future use.
Figure 2: VPLEX VS2 Engine Layout
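As a quick arithmetic check on the figures above, the sketch below (illustrative only, using the port counts and rates stated in the text) shows that a fully driven four-port 8 Gb/s module stays within the 40 Gb/s of effective PCI bandwidth available to it, which is also why each VS2 port can run at full line rate.

```python
# Illustrative check of VS2 I/O module bandwidth, using the figures quoted in the text.
FC_PORT_RATE_GBPS = 8      # line rate of each front-end / back-end FC port
PORTS_PER_MODULE = 4       # four-port FC I/O module
PCI_BANDWIDTH_GBPS = 40    # effective PCI bandwidth per module, per the text

aggregate_port_rate = FC_PORT_RATE_GBPS * PORTS_PER_MODULE   # 32 Gb/s
headroom = PCI_BANDWIDTH_GBPS - aggregate_port_rate          # 8 Gb/s of headroom

print(f"Aggregate FC line rate per module: {aggregate_port_rate} Gb/s")
print(f"Remaining PCI headroom:            {headroom} Gb/s")
```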
The VS2 engine uses N+1 cooling and power. Cooling is accomplished through two
independent fans for each director, four fans total in the entire enclosure. The fans are
integrated into the power supplies and provide front-to-rear cooling. The engine enclosure
houses four redundant power supplies that are each capable of providing full power to the
chassis. Redundant management modules provide IP connectivity to the directors from the
management console that is provided with each cluster. Two private IP subnets provide
redundant IP connectivity between the directors of a cluster and the cluster’s management
console.
Each engine is supported by a redundant standby power supply unit that provides power to
ride through transient power-loss and support write-cache vaulting.
Clusters containing two or more engines are fitted with a pair of Fibre Channel switches that
provide redundant Fibre Channel connectivity that support intra-cluster communication
between the directors. Each Fibre Channel switch is backed by a dedicated uninterruptible
power supply (UPS) that provides support for riding through transient power loss.
VPLEX System Configuration Limits
Capacity                              Local                   Metro                   Geo
Maximum virtualized capacity          No Known Limit          No Known Limit          No Known Limit
Maximum virtual volumes               8,000                   16,000                  16,000
Maximum storage elements              8,000                   16,000                  16,000
Minimum/maximum virtual volume size   100MB / 32TB            100MB / 32TB            100MB / 32TB
Minimum/maximum storage volume size   No VPLEX Limit / 32TB   No VPLEX Limit / 32TB   No VPLEX Limit / 32TB
IT Nexus Per Cluster                  3200                    3200                    400
Table 1: VPLEX system configuration limits
Engine Type   Model    Cache [GB]   FC speed [Gb/s]   Engines   FC Ports   Announced
VPLEX VS1     Single   64           8                 1         32         10-May-10
VPLEX VS1     Dual     128          8                 2         64         10-May-10
VPLEX VS1     Quad     256          8                 4         128        10-May-10
VPLEX VS2     Single   72           8                 1         16         23-May-11
VPLEX VS2     Dual     144          8                 2         32         23-May-11
VPLEX VS2     Quad     288          8                 4         64         23-May-11
Table 2: VPLEX VS1 and VS2 engine specifications
Table 1 and Table 2 show the current limits and hardware specifications for the VPLEX VS1 and VS2 hardware versions. Although the VS2 engines have half as many ports as VS1, actual system throughput is improved because each VS2 port can supply full line rate (8 Gbps) of throughput, whereas the VS1 ports are over-subscribed. Several of the VPLEX maximums are determined by the limits of the externally connected physical storage frames and are therefore unlimited in terms of VPLEX itself. The latest configuration limits are published in the GeoSynchrony 5.4 Release Notes, which are available at http://support.emc.com.
Section 2: XtremIO Architecture and Configuration Limits
The XtremIO Storage Array is an all-flash system, based on a scale-out architecture. The system
uses building blocks, called X-Bricks, which can be clustered together, as shown in Figure 3.
The system operation is controlled via a stand-alone dedicated Linux-based server, called the
XtremIO Management Server (XMS). Each XtremIO cluster requires its own XMS host, which can
be either a physical or a virtual server. The array continues operating if it is disconnected from the
XMS, but cannot be configured or monitored.
XtremIO's array architecture is specifically designed to deliver the full performance potential of
flash, while linearly scaling all resources such as CPU, RAM, SSDs, and host ports in a balanced
manner. This allows the array to achieve any desired performance level, while maintaining
consistency of performance that is critical to predictable application behavior.
The XtremIO Storage Array provides a very high level of performance that is consistent over time,
system conditions and access patterns. It is designed for high granularity true random I/O. The
cluster's performance level is not affected by its capacity utilization level, number of volumes, or
aging effects.
Due to its content-aware storage architecture, XtremIO provides:
• Even distribution of data blocks, inherently leading to maximum performance and minimal flash wear
• Even distribution of metadata
• No data or metadata hotspots
• Easy setup and no tuning
• Advanced storage functionality, including Inline Data Reduction (deduplication and data compression), thin provisioning, advanced data protection (XDP), snapshots, and more
XtremIO System Description
X-Brick
An X-Brick is the basic building block of an XtremIO array.
Figure 3: XtremIO X-Brick
Each X-Brick is comprised of:
• One 2U Disk Array Enclosure (DAE), containing:
  o 25 eMLC SSDs (standard X-Brick) or 13 eMLC SSDs (10TB Starter X-Brick [5TB])
  o Two redundant power supply units (PSUs)
  o Two redundant SAS interconnect modules
• One Battery Backup Unit
• Two 1U Storage Controllers (redundant storage processors)
Each Storage Controller includes:
  o Two redundant power supply units (PSUs)
  o Two 8Gb/s Fibre Channel (FC) ports
  o Two 10GbE iSCSI ports
  o Two 40Gb/s Infiniband ports
  o One 1Gb/s management/IPMI port
Feature              Specification (per X-Brick)

Physical             • 6U for a single X-Brick configuration
                     • 25 x eMLC Flash SSDs (standard X-Brick) or 13 x eMLC Flash SSDs (10TB Starter X-Brick [5TB])

High Availability    • Redundant, hot swap components
                     • No single point of failure (SPOF)

Host Access          Symmetrical Active/Active – any volume can be accessed in parallel from any target port on any controller with equivalent performance; there is no need for ALUA

Host Ports           • 4 x 8Gb/s FC
                     • 4 x 10Gb/s Ethernet iSCSI

Usable Capacity      • 10TB Starter X-Brick (5TB): 3.16TB (13 SSDs, with no data reduction) or 6.99TB (25 SSDs, with no data reduction)
                     • 10TB X-Brick: 7.47TB (with no data reduction)
                     • 20TB X-Brick: 14.9TB (with no data reduction)

Note: Maximum logical capacity will be higher depending on the data reduction rates on the application data.
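The usable capacities above assume no data reduction; the logical capacity presented to hosts scales with whatever deduplication and compression ratio the application data achieves. The sketch below is only an illustration of that relationship: the 7.47TB figure comes from the table, while the 3:1 reduction ratio is a hypothetical input.

```python
# Illustrative estimate of effective (logical) capacity from physical usable capacity
# and a data reduction ratio. The ratio is workload-dependent and hypothetical here.
def effective_capacity_tb(physical_usable_tb: float, data_reduction_ratio: float) -> float:
    """Logical capacity = physical usable capacity x data reduction ratio."""
    return physical_usable_tb * data_reduction_ratio

# Example: a 10TB X-Brick (7.47TB usable) with an assumed 3:1 data reduction ratio.
print(f"{effective_capacity_tb(7.47, 3.0):.1f} TB logical capacity")   # ~22.4 TB
```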
Scale-Out Architecture
An XtremIO storage system can include a single X-Brick or a cluster of multiple X-Bricks, as shown
below.
Figure 4: XtremIO Cluster Configurations
With clusters of two or more X-Bricks, the XtremIO array uses a redundant 40Gb/s QDR Infiniband network for back-end connectivity between the Storage Controllers, ensuring a highly available, ultra-low latency network. The Infiniband network is a fully managed component of the XtremIO
array, and administrators of XtremIO arrays do not need to have specialized skills in Infiniband
technology.
A single X-Brick cluster consists of:
• One X-Brick
• One additional Battery Backup Unit
A cluster of multiple X-Bricks consists of:
• Two or four X-Bricks
• Two Infiniband Switches
Table 3: Infiniband Switch Requirements per X-Brick
System Architecture
XtremIO works like any other block-based storage array and integrates with existing SANs, with a
choice of 8Gb/s Fibre Channel and 10Gb/s Ethernet iSCSI (SFP+) connectivity to the hosts.
However, unlike other block arrays, XtremIO is a purpose-built flash storage system, designed to
deliver the ultimate in performance, ease-of-use and advanced data management services. Each
Storage Controller within the XtremIO array runs a specially customized lightweight Linux
distribution as the base platform. The XtremIO Operating System (XIOS) runs on top of Linux and handles all activities within a Storage Controller, as shown in Figure 5. XIOS is optimized for handling high I/O rates and manages the system's functional modules, the RDMA over Infiniband operations, monitoring, and memory pools.
Figure 5: XIOS Architecture
XIOS has a proprietary process-scheduling-and-handling algorithm, which is designed to meet the
specific requirements of the content-aware, low latency, and high performance storage subsystem.
XIOS provides:
• Low-latency scheduling – to enable efficient context switching of sub-processes, optimize scheduling, and minimize wait time
• Linear CPU scalability – to enable full exploitation of any CPU resources, including multi-core CPUs
• Limited CPU inter-core sync – to optimize the inter-sub-process communication and data transfer
• No CPU inter-socket sync – to minimize synchronization tasks and dependencies between the sub-processes that run on different sockets
• Cache-line awareness – to optimize latency and data access
The Storage Controllers on each X-Brick connect to the disk array enclosure (DAE) that is attached
to them via redundant SAS interconnects. The Storage Controllers are also connected to a
redundant and highly available Infiniband fabric. Regardless of which Storage Controller receives
an I/O request from a host, multiple Storage Controllers on multiple X-Bricks cooperate to process
the request. The data layout in the XtremIO system ensures that all components inherently share
the load and participate evenly in I/O operations.
Section 3: VPLEX with XtremIO Performance Characteristics
Performance Overview
Understanding VPLEX overhead
In general, with VPLEX's large per-director cache, host reads with VPLEX are comparable to native XtremIO read response times. Host writes, on the other hand, follow VPLEX's write-through caching model on VPLEX Local and Metro and will inevitably have slightly higher latency than native XtremIO.
There are several factors involved in determining if and when latency is added by VPLEX. Factors such as host I/O request size, I/O pattern (random or sequential), I/O type (read or write), VPLEX internal queue congestion, and SAN congestion all play a role in whether or not latency is introduced by VPLEX. In real-world production environments, however, what do all of these factors add up to? Let's take a look at the average latency impact. We can break these latencies into the following three categories, based on the type of host I/O and whether or not the data resides in VPLEX cache:
• VPLEX read cache hits
• VPLEX read cache misses
• VPLEX writes
For VPLEX read cache hits, the VPLEX read response time typically ranges from 85-150 microseconds depending on the overall VPLEX system load and I/O size. This is very much in line with the read response time the XtremIO AFA provides natively. For local and distributed (Metro) devices, VPLEX adds a small amount of latency to each VPLEX read cache miss, in the range of 200-400 microseconds. It is important to note that VPLEX Metro read-miss performance is the same as VPLEX Local because reads are retrieved locally from the XtremIO array.
For host writes, the added latency varies based on whether the device is local or distributed (Metro). Typical additional write latency is approximately 200-600 microseconds for local devices. For VPLEX Metro distributed devices, the additional write latency is the WAN RTT (latency between sites) plus 200-600 microseconds. Here the benefits of non-disruptive mobility, of having a second copy of data within the same datacenter (on a different AFA), or of a copy at a remote data center outweigh the performance impact of Metro's write-through caching architecture.
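The ranges above can be folded into a rough planning estimate. The sketch below simply encodes the figures quoted in this section (85-150 microsecond read hits, roughly 200-400 microseconds added per read miss, roughly 200-600 microseconds added per write, plus the WAN RTT for Metro distributed devices); it is an illustration of the arithmetic, not a measured model, and the native XtremIO latencies used in the example are hypothetical.

```python
# Rough, illustrative latency estimate using the ranges quoted in this section.
# All values are in microseconds; native_read/native_write are hypothetical array latencies.
def estimate_latency_us(native_read, native_write, wan_rtt_us=0, distributed=False):
    read_hit = (85, 150)                                  # VPLEX read cache hit
    read_miss = (native_read + 200, native_read + 400)    # ~200-400 us added per read miss
    write_add = (200, 600)                                # ~200-600 us added per write
    rtt = wan_rtt_us if distributed else 0                # Metro distributed devices add WAN RTT
    write = (native_write + write_add[0] + rtt, native_write + write_add[1] + rtt)
    return {"read_hit_us": read_hit, "read_miss_us": read_miss, "write_us": write}

# Example: ~500 us native reads/writes on a Metro distributed device with a 1 ms WAN RTT.
print(estimate_latency_us(500, 500, wan_rtt_us=1000, distributed=True))
```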
These latency values will vary slightly depending on the factors mentioned earlier; for example, large-block I/O requests may have to be broken up into smaller parts (based on VPLEX or individual array capabilities) and then written serially in smaller pieces to the storage array. Since XtremIO write latency is not impacted by the overall workload the array is sustaining, the response time on writes from XtremIO will remain relatively consistent across the I/O workload range. The additive cache from VPLEX may offload a portion of read I/O from the XtremIO array, thereby freeing up array resources to take on additional write I/O when necessary. Additional discussion on this topic is provided in the subsequent host and storage sections.
Native XtremIO vs. VPLEX Local + XtremIO Testing and Results
Native XtremIO performance tests use direct fibre channel connections between one or more
hosts and the XtremIO array. VPLEX Local testing inserts VPLEX in the path between the
hosts and the XtremIO array.
VPLEX Configuration
Before a VPLEX system is brought into testing or production, it is important to follow the best practices outlined in the series of VPLEX technical notes entitled Implementation and Planning Best Practices Guides, available at https://support.emc.com.
Multi-pathing Configuration
For the testing conducted for this paper, EMC PowerPath® was installed on each host and configured to use the adaptive policy. Because the testing load-balances the benchmark workloads as evenly as possible, workload skew does not impact the results of using differing host multi-pathing configurations. Instead, the significant difference between multi-pathing configurations is the amount of inter-director messaging that occurs within VPLEX: the greater the number of directors each host is connected to, the more inter-director messaging can occur.
Volume Configuration
VPLEX local RAID 0 devices are used in testing. The virtual volumes are mapped 1:1 to
storage volumes exported from the XtremIO array. VPLEX Local RAID 1 tests use two
XtremIO X-Bricks.
Simulated Application Profiles
The following simulated application profiles were tested:
1. OLTP1 (mail application)
2. OLTP2 (small Oracle application / transactional database)
3. OLTP2-HW (large Oracle application / heavy-weight transactions)
These simulated application profiles are each a composite of five simple I/O profiles: Random Read Hits (rrh), Random Reads (Miss) (rr), Random Writes (rw), Sequential Reads (sr), and Sequential Writes (sw). The I/O size and proportion of each component I/O profile vary across the application profiles as detailed in the following tables.
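The actual I/O sizes and mix percentages for OLTP1, OLTP2, and OLTP2-HW are given in the referenced tables and are not reproduced here. Purely as a sketch of how such a composite profile can be expressed for a load generator, the weights and block sizes below are illustrative placeholders, not the tested profiles.

```python
# Illustrative composite I/O profile built from the five component profiles named in the
# text. The percentages and block sizes below are placeholders only, not the tested mixes.
composite_profile = {
    "rrh": {"pct": 20, "block_kb": 8},   # Random Read Hits
    "rr":  {"pct": 30, "block_kb": 8},   # Random Reads (Miss)
    "rw":  {"pct": 30, "block_kb": 8},   # Random Writes
    "sr":  {"pct": 10, "block_kb": 64},  # Sequential Reads
    "sw":  {"pct": 10, "block_kb": 64},  # Sequential Writes
}
assert sum(component["pct"] for component in composite_profile.values()) == 100
```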
Overall VPLEX-XtremIO Testing Observations
Before diving into each of the test results, there are a few key observations to bear in mind with the VPLEX-XtremIO solution:
• Application workloads that have a substantial random read hit (rrh) component typically benefit from VPLEX cache in front of XtremIO. The observed response times (latency) and IOPS are in line with or improve upon the native XtremIO results. There is a higher throughput (IOPS) limit from VPLEX cache than from XtremIO.
• High read-miss workloads running on VPLEX Local incur additional latency (200-600 microseconds) on each read operation compared to native XtremIO. If the planned workload is small-block with a high percentage of read misses, it is important to model the workload using EMC's Business Continuity Solution Designer (BCSD) tool. BCSD is discussed in Section 4: Sizing VPLEX-XtremIO Solutions.
• Applications that are predominantly small-block writes, large-block reads, or large-block writes do equally well in terms of maximum IOPS on native XtremIO or VPLEX + XtremIO solutions.
• With VPLEX local RAID-1 devices (which utilize a second X-Brick), a doubling of read bandwidth was achieved. This highlights a potential benefit of a VPLEX-XtremIO solution.
Simulated Application Test Results
For the set of simulated applications test results shown below, VPLEX had the expected impact on
IO latency throughout the IOPS range. Actual IOPS have intentionally been excluded to avoid
competitive comparisons. Often these comparisons ignore key environmental, test, or multi-pathing
conditions and can lead to improper conclusions about the appropriateness of one solution over
another.
Figure 6: OLTP1 Simulated Application Workload Latency (response time in ms vs. IOPS; series: XtremIO and VPLEX)
Figure 7: OLTP2 Simulated Application Workload Latency (response time in ms vs. IOPS; series: XtremIO and VPLEX)
Figure 8: OLTP2HW Simulated Workload Latency (response time in ms vs. IOPS; series: XtremIO and VPLEX)
Latency by Offered Workload
To examine latency throughout the I/O range, an offered workload of varying sizes was used to generate specific IOPS. For each offered workload (IOPS) level, the overall latency was measured and then plotted. IOPS and MB ranges are intentionally restricted so that comparisons can be made across both the XtremIO and the VPLEX + XtremIO solution latencies. The following eight charts show I/O latency graphs for a range of throughput and bandwidth values for four different I/O sizes, for XtremIO and VPLEX + XtremIO configurations.
Figure 9: 4KB Random Read Latency (response time in ms vs. IOPS; series: XtremIO and VPLEX)
Figure 10: 4KB Random Write Latency (response time in ms vs. IOPS; series: XtremIO and VPLEX)
Figure 11: 8KB Random Read Latency (response time in ms vs. IOPS; series: XtremIO and VPLEX)
Figure 12: 8KB Random Write Latency (response time in ms vs. IOPS; series: XtremIO and VPLEX)
Figure 13: 64KB Random Read Latency (response time in ms vs. MBps; series: XtremIO and VPLEX)
Figure 14: 64KB Random Write Latency (response time in ms vs. MBps; series: XtremIO and VPLEX)
Figure 15: 256KB Random Read Latency (response time in ms vs. MBps; series: XtremIO and VPLEX)
Figure 16: 256KB Random Write Latency (response time in ms vs. MBps; series: XtremIO and VPLEX)
Native XtremIO vs. VPLEX Metro + XtremIO Testing and Results
VPLEX Metro with XtremIO write performance is highly dependent upon the WAN round-trip-time (RTT) latency. The general rule of thumb for Metro systems is that host write I/O latency will be approximately 1x-3x the WAN round-trip time. While some may view this as an overly negative impact, we would caution against this view and highlight the following points. First, VPLEX Metro uses a synchronous cache model and is therefore subject to the laws of physics when it comes to data replication: in order to provide a true active-active storage presentation, it is incumbent on VPLEX to provide a consistent and up-to-date view of data at all times. Second, many workloads have a considerable read component, so the net WAN latency impact can be masked by the improvements in read latency provided by the VPLEX read cache. This is another reason we recommend a thorough understanding of the real application workload, to ensure that any testing that is done is applicable to the workload and environment you are attempting to validate.
VPLEX Metro's patented distributed coherent cache technology is differentiated across the entire IT storage segment, including all AFA vendors on the market today. Therefore, it is unfair to compare VPLEX Metro write performance to any array that is not doing synchronous replication. With XtremIO, there is currently no native synchronous remote replication option or native active/active datacenter option to compare to VPLEX Metro. Even though the comparison is unfair, it is valuable in that it highlights key benefits of the Metro-XtremIO solution compared to the native XtremIO solution.
Figure 17: Impact of WAN RTT Latency on VPLEX Metro
Figure 17 illustrates the impact of WAN latency on VPLEX Metro. As WAN latency is added, there is a corresponding impact on write I/O. The OLTP (green) lines show a simulated OLTP application (8KB and 64KB I/O with roughly equal read and write I/O) and the overall impact of WAN latency with VPLEX Metro.
Offered Load RTT Comparison for Metro
From a latency perspective, the test results below focus on a dual-engine VPLEX Metro system with an RTT latency of 0 milliseconds and an RTT latency of 1 millisecond. It is worth noting that although the RTT latency is 0, there are still additional Fibre Channel and/or IP network hops between the two clusters, so a small uptick in overall write latency can be expected with Metro compared to local RAID-1 write latency.
Figure 18: VPLEX Metro 4KB Write Latency @ 0 and 1 ms WAN RTT (response time in ms vs. IOPS; series: XtremIO, VPLEX RT Delay 0 ms, VPLEX RT Delay 1 ms)
Figure 19: VPLEX Metro 8KB Write Latency @ 0 and 1 ms WAN RTT (response time in ms vs. IOPS; series: XtremIO, VPLEX RT Delay 0 ms, VPLEX RT Delay 1 ms)
Figure 20: VPLEX Metro 64KB Write Latency @ 0 and 1 ms WAN RTT (response time in ms vs. MB/s; series: XtremIO, VPLEX RT Delay 0 ms, VPLEX RT Delay 1 ms)
Figure 21: VPLEX Metro 256KB Write Latency @ 0 and 1 ms WAN RTT (response time in ms vs. MB/s; series: XtremIO, VPLEX RT Delay 0 ms, VPLEX RT Delay 1 ms)
Backup or Write-Intensive Application Considerations
For write-throughput-intensive applications such as backups, be aware of the maximum available WAN bandwidth between VPLEX clusters. If the write workload exceeds the WAN link bandwidth, write response time will spike, and other applications may also see write performance degradation.
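A simple way to sanity-check this before scheduling a backup or other bulk-write job is to compare the expected peak write rate against the usable WAN bandwidth between the clusters. The sketch below is illustrative only; the write rate, link speed, and protocol-efficiency factor are hypothetical inputs.

```python
# Illustrative check: will a write-intensive workload (e.g. a backup) fit within the WAN
# bandwidth between VPLEX Metro clusters? All inputs below are hypothetical.
def wan_link_sufficient(peak_write_mb_s: float, wan_gbps: float, efficiency: float = 0.8) -> bool:
    """Return True if the peak write rate fits within the usable WAN bandwidth."""
    usable_mb_s = wan_gbps * 1000 / 8 * efficiency   # Gb/s -> MB/s, less protocol overhead
    return peak_write_mb_s <= usable_mb_s

print(wan_link_sufficient(peak_write_mb_s=700, wan_gbps=10))  # True:  ~1000 MB/s usable
print(wan_link_sufficient(peak_write_mb_s=700, wan_gbps=4))   # False: ~400 MB/s usable
```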
Native vs. VPLEX Geo Performance
Given the fundamental architectural differences of VPLEX Geo from Local and Metro, namely
its write-back caching model and asynchronous data replication, it's even more difficult to
accurately compare native array performance to VPLEX Geo performance.
In short, VPLEX Geo performance will be limited by the available drain rate, which is a function of the available WAN bandwidth and the XtremIO performance at each cluster. If a VPLEX director's incoming host write rate exceeds the outgoing write rate it can achieve, there will inevitably be push-back or throttling on the host, which will negatively affect per-operation host write latency, causing it to rise. Ensure the WAN and arrays are properly configured and that the various VPLEX Geo related settings are tuned properly. See the series of VPLEX technical notes entitled Implementation and Planning Best Practices Guides, available at https://support.emc.com.
Performance Testing Summary and Recommendations
The correct VPLEX-XtremIO solution is greatly influenced by the target workload IO profile
and latency requirements.
Note: Under normal sizing conditions the expectation is to know the workload up front, size VPLEX and XtremIO for 50% or less of their peak capabilities, and then select the appropriate hardware configuration. Neither VPLEX nor XtremIO should be sized to 100% of peak capability at initial deployment, as this leaves no room for growth in terms of performance or additional applications.
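A minimal sketch of that sizing rule follows; the planned and rated peak IOPS values are hypothetical inputs you would replace with your own workload data and the selected configuration's capability.

```python
# Illustrative headroom check for the 50%-of-peak sizing guideline in the note above.
def within_sizing_guideline(planned_iops: float, rated_peak_iops: float,
                            max_utilization: float = 0.5) -> bool:
    """Return True if the planned workload uses no more than half of the rated peak."""
    return planned_iops <= rated_peak_iops * max_utilization

print(within_sizing_guideline(planned_iops=80_000,  rated_peak_iops=200_000))  # True
print(within_sizing_guideline(planned_iops=150_000, rated_peak_iops=200_000))  # False
```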
If specific IOPS numbers, latency values, or performance levels are required, EMC can provide local resources with details and analysis for your specific environment via its vSPEED specialist program. Contact your EMC account representative for more details.
Section 4: Sizing VPLEX-XtremIO Solutions
VPLEX Sizing Overview
Sizing a VPLEX-XtremIO solution is a straightforward process. The key is to understand the applications and I/O workload that will be placed onto VPLEX and, consequently, onto the XtremIO array. By gathering the real-world application I/O profile, workload, and latency requirements, an accurate assessment can be made as to the configuration requirements of the entire solution. As noted in earlier sections, a single-engine VPLEX can very often fit the bill for a single X-Brick XtremIO array, but there are workloads for which a single-engine-to-single-X-Brick ratio will not be sufficient.
EMC offers individual sizing tools and also professional services to provide the right level of sizing
capabilities for each customer environment.
The BCSD Tool
EMC’s Business Continuity Solution Designer (BCSD) is available to EMC customers, solutions
providers and partners. It enables users to analyze and design new and existing environments for
remote replication. By automating the design and sizing tasks, BCSD significantly reduces the time
spent on these tasks.
Users create projects with specific configurations and requirements, specify workloads based on statistical performance data or user-defined workload characteristics, and then analyze and model that data. Key modeling results and charts are presented to the user for review, and reports can optionally be generated.
For risk-sensitive use cases, a risk analysis is performed by comparing the results against engineering best practices, product limits, and user-defined requirements. This risk analysis enables a user to determine whether a proposed configuration and its requirements are a low, medium, or high risk and whether or not the configuration and requirements should be changed to reduce the risk.
What's New in BCSD Version 1.6.7
• VPLEX 5.4 support
• MetroPoint support (note the modeling considerations in the Getting Started Guide)
• Special modeling for the VPLEX logical relationship when Site-3 MetroPoint is selected
• Target RecoverPoint utilization support
• RecoverPoint PowerPoint-based Analysis Summary updates
• Restrict RecoverPoint Snap Based Replication to version 4.1
• Replace Send Support Request with "Collect BCSD Logs and Project" and "Open Support Request"
EMC Global Business Services -- VPLEX “Sizing as a Service”
The GBS Storage Sizing Team uses customer performance data and desired objectives to assess,
analyze, and create a customized presentation containing a properly sized VPLEX solution,
alternatives and supporting information. GBS Services are designed to help customers and EMC
solution teams spend more time on their day-to-day business objectives by offloading repetitive or complex tasks from them. GBS offers sizing for both VPLEX Local and VPLEX Metro solutions.
The features and benefits of the Pre-Sales VPLEX Sizing service are:
• Shortened design cycles by putting data-driven recommendations in front of the customer
  o Answers all questions related to required bandwidth and engine utilization
• Increased overall solution satisfaction with data-driven recommendations:
  o Based on EMC best practices processed through Engineering tools
  o Bandwidth recommendations if a network circuit needs to be purchased
  o Validate that customer bandwidth already in place is sufficient for VPLEX
  o Validate that the customer's requested engine count meets utilization best practices for both current and growth solutions, and recommend alternatives when it does not
  o Provide engine count recommendations when the count is unknown
Contact your local EMC Sales Office or EMC Service Representative for details on this and other
service offerings.
Sizing Steps
The process of sizing your solution is as follows:
1. Gather application performance requirements using host and existing storage array data. More data and a higher sampling frequency will help ensure the best overall fit of the final solution.
2. Use the VPLEX BCSD sizing tool to determine the correct number of VPLEX engines for your solution.
3. Select the correct XtremIO X-Brick configuration for your workload (1, 2, or 4 X-Bricks).
4. Follow the EMC installation best practice guides for VPLEX and for XtremIO.
5. Test application performance prior to turning it over to production.
Section 5: VPLEX Performance Checklist
This section summarizes the topics that have been covered in the previous sections and provides a quick review of the overall performance considerations. Here is a checklist of factors to consider when deploying VPLEX:
Ensure VPLEX Best Practices have been followed
The best way to ensure the optimal operation of your VPLEX is to adhere to the VPLEX configuration best practices. The best practices for VPLEX are documented in a series of technical notes entitled VPLEX Implementation and Planning Best Practices Guides, available at https://support.emc.com. These guides provide targeted information regarding key considerations, limitations, and architecture details for VPLEX design.
Run the latest GeoSynchrony code
VPLEX bug fixes and general performance enhancements are continually released. Run the latest available stable version, and read the release notes for each release so you know what is changing and what the known issues are.
Check ETA and Primus articles
Follow and understand all VPLEX-related EMC Technical Advisories (ETAs) and performance-related Primus articles. An ETA identifies an issue that may cause serious negative impact to a production environment. EMC's technical support organization proactively publishes this information on Powerlink.emc.com.
Load balance across VPLEX directors and fibre channel ports
Avoid overloading any one particular VPLEX director or pair of directors (with dual- and quad-engine systems). The same goes for VPLEX front-end ports: spread the I/O workload around and avoid creating hot spots, which can cause artificial performance bottlenecks.
Separate IO sensitive workloads
When creating VPLEX front-end storage views, isolate applications where possible onto different physical VPLEX resources. Be sure to spread the workload across the available front-end FC ports on a VPLEX I/O module, up to four available per director on VS2 hardware. The same is true for back-end FC (storage array) port consumption. When practical, use back-end ports in a rotating fashion so that all four BE ports are consumed by various storage arrays before re-using the same BE port for another array or arrays. Competing latency-sensitive applications sharing a single FC port (FE or BE) may impact each other's performance.
Check System Status
Be very aware of the general health of your VPLEX system, your storage arrays, your storage fabrics, and, for Metro/Geo, the health of your WAN infrastructure.
With the Unisphere for VPLEX GUI, pay particular attention to the System Status and Performance Dashboard tabs. Keep an eye out for component errors, front-end aborts, back-end errors, WAN errors, dropped packets, and/or packet re-transmissions, since these indicate that key portions of I/O operations are failing and may have resulted in retried operations. In turn, they affect the performance of your VPLEX system.
Configure Multi-pathing
Remember that there is no single host multi-pathing policy that fits every I/O scenario. Generally, PowerPath's Adaptive policy (the default for VPLEX devices) is sufficient. Avoid excessive multi-director multi-pathing, and in a Metro cross-connect environment, set HBAs to prefer the local paths. Depending on the specifics of your environment, you may wish to try different policies to see which best suits the workload.
Front-end/host initiator port connectivity summary:
• The front-end dual fabrics should have a minimum of two physical connections to each director (required)
• Each host should have at least two paths to a cluster (required)
• Each host should have at least one path to an A director and one path to a B director on each fabric, for a total of four logical paths (required for NDU)
• Hosts should be configured to a pair of directors to minimize cache transfers and improve performance
• At the extreme, performance benefits can be maximized if both directors used by a host are on the same engine, as cache transfers would happen via the internal CMI bus within the engine chassis. In general, this is not a recommended best practice when 2 or more engines are available.
• Maximum availability for host connectivity is achieved by using hosts with multiple host bus adapters and with zoning to all VPLEX directors. It is important to note, however, that this would be analogous to zoning a single host to all storage ports on an array. Though this sort of connectivity is technically possible and provides the highest availability, from a cost-per-host, administrative complexity, overall performance, and scalability perspective it would not be a practical design for every host in the environment.
• Each host should have redundant physical connections to the front-end dual fabrics (required).
• Each host should have fabric zoning that provides redundant access to each LUN from a minimum of two directors on each fabric.
• Four paths are required for NDU.
Note: The most comprehensive treatment of VPLEX best practices can be found in the VPLEX Implementation and Planning Best Practices Technote, which is located at http://support.emc.com.
Ensure your file-system is aligned
A properly aligned file-system is a performance best practice for every storage product from
every vendor in the marketplace.
• Windows Server 2008, VMware vSphere 5.0, and some more recent Linux environments automatically align their disk partitions.
• When provisioning LUNs for older Windows and Linux operating systems that use the legacy 63-block partition offset, the host file system needs to be aligned manually. EMC recommends aligning the file system with a 1 MB offset; a simple alignment check is sketched below.
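As a minimal illustration on a Linux host, the partition start offset can be read from sysfs (which reports it in 512-byte sectors) and checked against a 1 MB boundary. This sketch assumes the standard /sys/block layout and is not an EMC tool; adjust the device names for your environment.

```python
# Illustrative 1 MB alignment check for a Linux partition, using the start sector
# exposed in sysfs (always expressed in 512-byte units). Device names are examples.
def is_1mb_aligned(disk: str = "sda", partition: str = "sda1") -> bool:
    with open(f"/sys/block/{disk}/{partition}/start") as f:
        start_sector = int(f.read().strip())
    offset_bytes = start_sector * 512
    return offset_bytes % (1024 * 1024) == 0

print(is_1mb_aligned("sda", "sda1"))
```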
Understand VPLEX transfer-size
During the initial synchronization / rebuilding of mirrored devices (both local and distributed) and during device mobility activities, VPLEX uses the transfer-size parameter to determine how much of a source volume can be locked during the copy activity from the source device to the target device (mirror legs). This value is 128KB by default, which ensures the least impact. It is important to realize that 128KB is extremely conservative; if the goal is to see how fast an initial sync, rebuild, or mobility job can be completed, the parameter can typically be increased to at least 2MB without a noticeable impact on host I/O. As with any activity that involves heavy I/O to back-end storage, it is important to adjust this value gradually to ensure the host, the array, and the infrastructure can tolerate the increased write activity. Transfer-size can be set up to a maximum value of 32MB for the fastest sync, rebuild, or mobility activities.
Know your baseline performance
Often, performance troubleshooting requires knowing what the native host-to-storage-array performance is. It is very important to know baseline performance when adding VPLEX to an existing environment. There are circumstances when VPLEX may be a victim of unsatisfactory storage-array performance, as VPLEX performance is heavily dependent on back-end array performance. Baseline data makes it easier to determine whether a problem existed before or after VPLEX was added. You can always check the observed VPLEX front-end and back-end latencies to confirm the overall net latency impact. By following your storage-array vendor's performance best practices, you will also maximize your observed VPLEX performance.
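A minimal sketch of that comparison: given a host-measured baseline latency against the native array and the VPLEX front-end and back-end latencies observed afterwards, the latency added by VPLEX can be separated from any change in array or SAN behavior. The inputs and names below are hypothetical.

```python
# Illustrative baseline comparison. All latencies in milliseconds; inputs are hypothetical.
def latency_breakdown(baseline_native_ms, vplex_frontend_ms, vplex_backend_ms):
    """Split observed latency into VPLEX-added latency and array/SAN-side change."""
    vplex_added = vplex_frontend_ms - vplex_backend_ms     # latency added by VPLEX itself
    array_change = vplex_backend_ms - baseline_native_ms   # change in array/SAN behavior
    return {"vplex_added_ms": round(vplex_added, 3), "array_change_ms": round(array_change, 3)}

# Example: 0.55 ms native baseline, 0.95 ms VPLEX front-end, 0.60 ms VPLEX back-end.
print(latency_breakdown(0.55, 0.95, 0.60))
```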
One size does not fit all
EMC Global Services Pre-sales personnel have tools specifically designed to help size your
EMC VPLEX environment(s).
These tools provide:
1. A GUI to enter environment and workload details and quickly size a local or metro
solution.
2. A way to check a proposed solution against the technical and performance boundaries of VPLEX to help assess which VPLEX solution best meets the environmental requirements.
3. An export of the "proposed" solutions to Excel to aid in comparing various "what-if"
scenarios.
4. Support for GeoSynchrony 5.0.1+ Local / Metro FC / Geo
If you have questions or concerns about the appropriate number of engines for your VPLEX
system, please contact your EMC account team.
Section 6: VPLEX + XtremIO Use Cases
Introduction
Organizations have rapidly adopted the EMC XtremIO all-flash scale-out enterprise storage array
for their most I/O-intensive workloads including databases and analytics, virtual desktop
infrastructures, and virtual server environments. XtremIO delivers new levels of real-world
performance, administrative ease, and advanced data services, making the all-flash array
competitive with spinning disk storage systems.
The need to protect these mission-critical workloads, to make them continuously available through planned and unplanned outages, and to replicate and recover them to anywhere and to any point in time is as great as ever.
Organizations are looking for ways to:
• Move workloads non-disruptively from spinning disk to flash and then back again to accommodate changes in class of service
• Make application data continuously available even in the event of an array or full site failure
• Replicate application data at any distance, to any array, whether it be another flash array or spinning disk
• Rapidly restore replicated data back to any point in time
Active-active continuous availability and mobility plus advanced disaster recovery
VPLEX enables XtremIO All-Flash arrays to access the full data protection continuum by offering
continuous availability, mobility and disaster recovery. With VPLEX and XtremIO All-flash arrays
EMC customers can take advantage of active-active infrastructure and continuous operations for
mission-critical applications, including VMware, Oracle RAC, and SAP. In addition, VPLEX enables
non-disruptive data migrations from legacy or lower performance tiers of storage to XtremIO in
order to change class of service, to balance workloads, to achieve non-disruptive X-Brick
expansions, code upgrades, and technology refreshes.
Complementing the VPLEX and XtremIO solution, EMC RecoverPoint is an operational and
disaster recovery solution, providing concurrent local and remote replication with continuous data
protection with recovery to any point in time for mission-critical applications. Together, VPLEX with
XtremIO and RecoverPoint deliver MetroPoint topology, an industry-unique advanced continuous
availability with continuous disaster recovery configuration that provides continuous operations for
two data center sites, remote replication to a third site, and the ability to sustain a two-site failure
with only a single disaster recovery copy. With these solutions, EMC raises the bar by combining
the best of VPLEX Metro and RecoverPoint, offering the most resilient continuous availability and
protection in the industry.
Changing class of service non-disruptively
XtremIO X-Bricks service the most mission-critical applications with the highest I/O needs. As these
applications progress through their development and production lifecycles, the ability to right-source
the application performance, protection, and capacity needs to the appropriate class of storage
service becomes paramount.
When XtremIO volumes are presented by VPLEX, application data can be non-disruptively migrated from XtremIO X-Bricks to spinning disk arrays and back again, even during production.
This allows a workload that has a periodic need for increased I/O performance, such as data
analytics at the end of the quarter, to be migrated from spinning disk arrays to XtremIO. When the
need for high performance is over, the application data can be returned to its original disk (or
anywhere else in the VPLEX environment) non-disruptively.
Zero downtime, zero data loss
XtremIO workloads are frequently the kinds of applications that can tolerate very little or no
downtime or data loss because of their business critical nature. VPLEX can mirror application data
across two arrays in a single data center or between two data centers across synchronous
distance, creating an active-active mirror of the application data. If one side of the mirror fails, host
I/O requests continue to be serviced by the remaining side. This is true for a component failure, an
array failure or even a full site failure, resulting in truly zero RPO and RTO for application data.
Continuous data protection and remote replication
When continuous availability isn’t enough and application data needs to be replicated to a third site
for protection from operational failures and disasters, RecoverPoint can be used to replicate from
XtremIO to any other supported array synchronously or asynchronously anywhere in the world.
Because RecoverPoint journals every individual write stored by VPLEX and XtremIO and
automatically replicates that data remotely, recovery to any point in time allows for unparalleled
levels of operational and disaster recovery.
Section 7: Benchmarking
Tips when running the benchmarks
There are four important guidelines to running benchmarks properly:
1) Ensure that every benchmark run is well understood. Pay careful attention to the
benchmark parameters chosen, and the underlying test system’s configuration and
settings.
2) Each test should be run several times to ensure accuracy, and standard deviation or
confidence levels should be used to determine the appropriate number of runs.
3) Tests should be run for a long enough period of time, so that the system is in a steady
state for a majority of the run. This means most likely at least tens of minutes for a single
test. A test that only runs for 10 seconds or less is not sufficient.
4) The benchmarking process should be automated using scripts to avoid mistakes associated with manual repetitive tasks. Proper benchmarking is an iterative process: inevitably you will run into unexpected, anomalous, or just interesting results, and to explain them you often need to change configuration parameters or measure additional quantities, necessitating additional iterations of your benchmark. It pays to automate the process as much as possible from start to finish (see the automation sketch after this list).
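As a sketch of guidelines 2 and 4, the script below repeats the same test several times and reports the mean and standard deviation so you can judge whether more runs are needed. It assumes the open-source fio tool is installed; the device path and job parameters are placeholders, and this is an illustration, not the harness used for the results in this paper.

```python
# Illustrative benchmark automation: repeat a fio run N times and report mean / stddev IOPS.
# Assumes fio is installed; the device path and job parameters below are placeholders.
import json
import statistics
import subprocess

def run_fio_iops(device="/dev/sdX", runtime_s=600, iodepth=32, block="8k", runs=5):
    totals = []
    for _ in range(runs):
        out = subprocess.run(
            ["fio", "--name=bench", f"--filename={device}", "--direct=1",
             "--rw=randrw", "--rwmixread=70", f"--bs={block}", "--ioengine=libaio",
             f"--iodepth={iodepth}", f"--runtime={runtime_s}", "--time_based",
             "--output-format=json"],
            capture_output=True, text=True, check=True)
        job = json.loads(out.stdout)["jobs"][0]
        totals.append(job["read"]["iops"] + job["write"]["iops"])
    return statistics.mean(totals), statistics.stdev(totals)

if __name__ == "__main__":
    mean_iops, stdev_iops = run_fio_iops()
    print(f"mean = {mean_iops:.0f} IOPS, stdev = {stdev_iops:.0f} IOPS")
```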
Take a scientific approach when testing
Before starting any systems performance testing or benchmarking, here are some best practices:
• First things first, define your benchmark objectives. You need success metrics so you know that you have succeeded. They can be response times, transaction rates, users, anything, as long as they are something.
• Document your hardware/software architecture. Include device names and specifications for systems, network, storage, and applications. It is considered good scientific practice to provide enough information for others to validate your results. This is an important requirement if you find the need to engage EMC Support on benchmarking environments.
• When practical, implement just one change variable at a time.
• Keep a change log. What tests were run? What changes were made? What were the results? What were your conclusions for that specific test?
• Map your tests to the performance reports you based your conclusions on. Sometimes using codes or special syntax when you name your reports helps.
Typical Benchmarking Mistakes
Testing peak device performance with one outstanding IO
A storage system cannot possibly be peak-tested when it is not fully utilized; with only a single outstanding I/O, the storage device spends much of its time waiting.
Performance testing on shared infrastructure or multiple user system
A shared resource cannot and should not be used for performance testing. Doing so calls into question the performance results gathered, since it is anyone's guess who happened to be doing what on the system at the same time as the test. Ensure that the entire system solution (host, storage appliance, network, and storage array) is completely isolated from outside interference.
Do not conduct performance testing on a production system, since the benchmark-generated I/O workload could affect production users.
Comparing different storage devices consecutively without clearing host server
cache
Caching of data from a different performance run could require the host server cache to flush out dirty data from the previous test run, which would affect the current run. Better yet, run performance tests to storage devices on the host in raw or direct mode, completely bypassing host cache.
Testing where the data set is so small the benchmark rarely goes beyond cache
Be aware of the various levels of caching throughout the system stack: server, storage appliance (VPLEX, IBM SVC, or other), and the storage-array. Choose a sufficient working set size and run the test long enough to keep caching effects from dominating the results. It is also important not to use too large a working set: one that is far too large can completely negate the benefits of the storage-engine and array caches and fail to represent real-world application performance.
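As a rough illustration of working-set sizing, the sketch below picks a working set a few times larger than the combined caches in the stack. All of the cache sizes and the 4x multiplier are hypothetical placeholders, not VPLEX or XtremIO specifications; substitute the values for your own environment.

# Hypothetical cache sizes -- substitute the values for your own stack.
HOST_CACHE_GB  = 16    # host file-system / buffer cache left available
VPLEX_CACHE_GB = 36    # appliance-layer read cache (placeholder)
ARRAY_CACHE_GB = 48    # storage-array DRAM cache (placeholder)

combined_cache_gb = HOST_CACHE_GB + VPLEX_CACHE_GB + ARRAY_CACHE_GB

# Rule of thumb used here (an assumption, not a vendor requirement):
# make the working set several times larger than the combined caches so
# cache hits cannot dominate, while staying within provisioned test capacity.
working_set_gb = 4 * combined_cache_gb
print(f"Suggested working set: ~{working_set_gb} GB "
      f"(combined caches ~{combined_cache_gb} GB)")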
It is not always clear whether benchmarks should be run with "warm" or "cold" caches. On one hand, real systems do not generally run with a completely cold cache, and many applications benefit greatly from caching algorithms, so a benchmark that eliminates the cache entirely is not representative. On the other hand, a benchmark that accesses too much cached data may be equally unrealistic, since the I/O requests may never reach the component under test.
Inconsistent cache states between tests
Not bringing the various system caches back to a consistent state between runs can cause
timing inconsistencies. Clearing the caches between test runs will help create identical runs,
thus ensuring more stable results. If, however, warm cache results are desired, this can be
achieved by running the experiment n+1 times, and discarding the first run's result.
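A minimal sketch of that n+1 approach follows, assuming run_once is any callable that executes one benchmark iteration and returns a single metric (IOPS, for example), such as a wrapper around the fio invocation shown earlier.

import statistics

def warm_cache_results(run_once, n=5):
    """Run the benchmark n+1 times, discard the cold first run, and
    summarize the remaining warm-cache results."""
    results = [run_once() for _ in range(n + 1)]
    warm = results[1:]                     # throw away the warm-up run
    return statistics.mean(warm), statistics.stdev(warm)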
Testing storage performance with file copy commands
Simple file copy commands are typically single threaded and issue a single outstanding I/O at a time, which is poor for performance testing and does not reflect normal application usage.
Testing peak bandwidth of storage with a bandwidth-limited host peripheral slot
If your server is an older model, you may be limited by the host motherboard PCI bus. Ensure you have sufficient host hardware resources (CPU, memory, bus, HBA or CNA cards, etc.). An older Fibre Channel network (2Gb/s, for example) may also limit the performance of newer servers.
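A back-of-the-envelope check of link ceilings helps here. The sketch below uses the common rule of thumb that these Fibre Channel generations deliver roughly 100 MB/s of usable bandwidth per Gb/s of line rate; the figures are approximate and stated as an assumption.

# Approximate usable bandwidth for older FC generations (8b/10b encoding).
for gbps in (2, 4, 8):
    print(f"{gbps} Gb/s FC ~ {gbps * 100} MB/s usable")
# A single 2 Gb/s HBA port (~200 MB/s) will cap a large-block sequential
# test long before a modern all-flash array does.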
Forgetting to monitor processor utilization during testing
As with peak bandwidth limitations on hosts, ensure that your host server's CPU is not completely consumed. If it is, your storage performance is bound to be limited.
The same goes for the storage virtualization appliance and the storage-array: if you max out the available CPU resources in the storage device, you will be performance limited.
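On the host side, a simple sampler like the sketch below can run alongside the benchmark to flag intervals where the server itself is the likely limiter; it assumes the third-party psutil package is installed. The appliance and array should be watched with their own management tools.

import time
import psutil   # third-party package: pip install psutil

def sample_cpu(duration_secs=600, interval_secs=5, threshold_pct=85.0):
    """Sample host CPU utilization during a benchmark run and flag
    intervals where the host may be the bottleneck."""
    end = time.time() + duration_secs
    while time.time() < end:
        pct = psutil.cpu_percent(interval=interval_secs)
        flag = "  <-- host likely the limiter" if pct >= threshold_pct else ""
        print(f"{time.strftime('%H:%M:%S')}  cpu={pct:5.1f}%{flag}")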
Not catching performance bottlenecks
Performance bottlenecks can occur at every layer of the I/O stack between the application and the data resting on flash or spinning media.
Ultimately the performance the application sees depends on all of the sub-components situated between it and the storage, but it is critical to understand in which layer of this cake the performance limitation exists. One misbehaving layer can spoil everything.
Performance testing with artificial setups
Avoid "performance specials". Test with a system configuration that is similar to your
production target.
VMware vSphere - Performance testing directly on the ESXi hypervisor console
Don't do it. Ever. ESXi explicitly throttles the performance of the console to prevent a
console app from killing VM performance. Also, doing I/O from the console directly to a file
on VMFS results in excessive metadata operations (SCSI reservations) that otherwise would
not be present when running a similar performance test from a VM.
Understand the Metamorphosis of an IO
Here's why you typically don't want to use a high level benchmarking program for storage
testing. What you think the storage device sees might be something completely different.
Each software layer within the host may be transparently segmenting, rearranging, and
piecing back together the initial I/O request from the application.
Figure 22: Host to VPLEX I/O Stack
VPLEX Performance Benchmarking Guidelines
There are a few themes to mention with regards to performance benchmarking with VPLEX.
Test with multiple volumes
VPLEX performance benefits from I/O concurrency, and concurrency is most easily achieved by running I/O to multiple virtual-volumes. Testing with only one volume does not exercise the full performance capabilities of VPLEX or the storage-array. With regard to the previously mentioned single-outstanding-I/O issue, with enough volumes active (such as a few hundred) a single outstanding I/O per volume is acceptable: many active volumes create a decent level of concurrency, whereas a single volume with a single outstanding I/O most definitely does not.
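One way to generate that concurrency is to drive each virtual volume from its own worker, as in the sketch below. The device paths are hypothetical, and fio is assumed to be installed on the host.

import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical VPLEX virtual-volume device paths on the host.
VOLUMES = [f"/dev/mapper/vplex_vol_{i:03d}" for i in range(16)]

def run_one(dev):
    """One fio process per virtual volume; the aggregate provides concurrency."""
    cmd = ["fio", "--name=bench", f"--filename={dev}", "--rw=randread",
           "--bs=8k", "--iodepth=4", "--ioengine=libaio", "--direct=1",
           "--time_based", "--runtime=600", "--output-format=json"]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

with ThreadPoolExecutor(max_workers=len(VOLUMES)) as pool:
    outputs = list(pool.map(run_one, VOLUMES))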
Storage Arrays
For VPLEX Metro configurations, ensure that each cluster's storage-arrays are of equal
class. Check the VPLEX back-end storage-volume read and write latency for discrepancies.
Perform a local-device benchmarking test at each cluster, if possible, to eliminate the WAN
and remote storage-array from the equation.
One Small Step
Walk before you run. It is tempting to test the full virtualization solution end to end, soup to nuts. For VPLEX, this is not always the most scientific approach when problems arise: by testing end to end immediately, the results may disappoint (due to unrealistic expectations) and lead to false conclusions about the overall solution without an understanding of the individual pieces of the puzzle.
Take a moderated staged approach to system testing:
1) Start with your native performance test:
Host <-> storage-array
1a) If you have a two cluster deployment in mind, it is important to quantify the performance
of the storage-arrays at each cluster.
This will be your baseline to compare to VPLEX. For certain workloads, VPLEX can only
perform as well as the underlying storage-array.
2) Encapsulate the identical or similar-performing volumes into VPLEX, configuring them as local-devices:
Host <-> VPLEX <-> storage-array
2a) Test both clusters' local-device performance. (Note: the second cluster's VPLEX local-device performance test can be skipped if Step 1 showed satisfactory native performance on the second cluster.)
3) Create a VPLEX distributed-device spanning both clusters’ storage-arrays.
Host <-> VPLEX <-> cluster-1 storage-array and cluster-2 storage-array (distributed-device)
Tip: Any significant performance degradation at this step should focus troubleshooting efforts on the WAN.
Be cognizant of the inherent write-latency increase when writing to a distributed-device with VPLEX Metro.
VPLEX Geo's write-back caching model initially allows VPLEX to absorb host writes; over time, however, performance will be limited by the system's sustained drain rate (WAN performance and storage-array performance).
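Comparing the baselines collected at each stage makes it obvious where latency is being added. The sketch below uses invented placeholder numbers purely to illustrate the comparison; they are not measured results.

# Hypothetical average-latency values (ms) collected at each stage of the
# staged test plan above, for the same workload definition.
baselines_ms = {
    "native cluster-1 array":         0.45,
    "native cluster-2 array":         0.47,
    "VPLEX local-device (cluster-1)": 0.60,
    "VPLEX distributed-device":       1.55,
}

reference = baselines_ms["native cluster-1 array"]
for stage, latency in baselines_ms.items():
    print(f"{stage:32s} {latency:5.2f} ms  (+{latency - reference:4.2f} ms vs native)")
# A large jump only at the distributed-device stage points the
# investigation at the WAN rather than at VPLEX or the arrays.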
IOMeter Example
This section provides IOMeter settings examples in the form of screen captures from actual test systems. They illustrate the settings that can be used to simulate various workloads and create benchmarks.
Disk Targets Tab:
Access Specification Tab:
Application Simulation Testing with IOMeter
IOMeter can be used to synthesize a simplistic application I/O workload. Alignment should
be set to a page size of 4KB or 8KB.
Single threaded I/O:
Multi-threaded I/O:
Simulating Database Environments:
• Use small transfer requests.
• Mostly random distribution.
• Match your existing database application read/write mix.
Simulating Data Streaming Environments:
• Use large transfer requests to generate peak bandwidth.
• Mostly sequential distribution.
• Match your existing streaming application read/write mix.
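For readers who prefer a scriptable tool, the sketch below gives hedged fio approximations of these two profiles; the block sizes, read/write mixes, and queue depths are placeholders to be matched to your own application.

# Rough fio equivalents of the two IOMeter profiles above (values are
# placeholders, not recommendations).
PROFILES = {
    "database-like":  ["--rw=randrw", "--rwmixread=70", "--bs=8k",
                       "--iodepth=16", "--direct=1"],
    "streaming-like": ["--rw=read", "--bs=256k",
                       "--iodepth=8", "--direct=1"],
}

def fio_args(profile, device):
    """Build the fio command line for one simulated workload profile."""
    return ["fio", "--name=sim", f"--filename={device}",
            "--ioengine=libaio", "--time_based", "--runtime=600",
            *PROFILES[profile]]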
Conclusion
This paper focused on VPLEX Local and Metro with the XtremIO all-flash array. Because VPLEX lives at the very heart of the storage area network, VPLEX's primary design principles are continuous availability and minimized I/O latency. When paired with XtremIO's sub-millisecond I/O response times, these capabilities make for a differentiated and very high performing storage solution. When VPLEX with XtremIO is properly sized and configured, latency can be reduced for read-skewed workloads and kept nearly neutral for write-biased workloads. Individual results will, of course, vary based on the application and I/O workload.
We have seen how inserting an inline virtualization engine like VPLEX in front of XtremIO has the potential to increase I/O latency and limit peak IOPS for certain application workloads and profiles. In addition, test results showed how writes behave at metro distances under a synchronous caching model. The read/write mix, the I/O size, and the I/O stream characteristics can all affect the overall result. If benchmark or proof-of-concept testing is being done, it is important to understand the factors that impact VPLEX performance and to make every effort to ensure the benchmark workload is as close to the real-world workload as possible.
The role of SAN, server, and storage capabilities in terms of congestion, reads, and writes was another important topic of discussion. These external components are extremely relevant in determining overall VPLEX performance results. We have discussed how VPLEX's read cache may increase performance compared to the native XtremIO baseline, and how each host write must be acknowledged by the back-end storage frames. Understanding the impact of VPLEX, and how an environment can be prepared for single, dual, or quad VPLEX clusters, will greatly increase the chances of success when configuring virtualized storage environments for testing, benchmarks, and production.
References
• XtremIO web site: http://www.emc.com/storage/xtremio/index.htm
• IDC Technology Assessment – All-Flash Array Performance Testing Framework: http://idcdocserv.com/241856
• VDbench: http://www.oracle.com/technetwork/server-storage/vdbench-downloads-1901681.html
• FIO: http://freecode.com/projects/fio
• IOmeter: http://sourceforge.net/projects/iometer/files/iometer-devel/1.1.0-rc1/
• IOMeter screen captures and Product Testing discussion: http://www.snia.org/sites/default/education/tutorials/2007/spring/storage/Storage_Performance_Testing.pdf
• BTEST: http://sourceforge.net/projects/btest/
• VMware VAAI Knowledge Base Article: http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&externalId=1021976
• White Paper: Workload Resiliency with EMC VPLEX
• Techbook: EMC VPLEX Architecture and Deployment: Enabling the Journey to the Private Cloud
• VPLEX 5.3 Administrators Guide
• VPLEX 5.3 Configuration Guide
• VPLEX Procedure Generator
• EMC VPLEX HA Techbook
Appendix A: Terminology
Storage volume: LUN or unit of storage presented by the back-end arrays.
Metadata volume: System volume that contains metadata about the devices, virtual volumes, and cluster configuration.
Extent: All or part of a storage volume.
Device: Protection scheme applied to an extent or group of extents.
Virtual volume: Unit of storage presented by the VPLEX front-end ports to hosts.
Front-end port: Director port connected to host initiators (acts as a target).
Back-end port: Director port connected to storage arrays (acts as an initiator).
Director: The central processing and intelligence of the VPLEX solution. There are redundant (A and B) directors in each VPLEX Engine.
Engine: Consists of two directors and is the unit of scale for the VPLEX solution.
VPLEX cluster: A collection of VPLEX engines in one rack, using redundant, private Fibre Channel connections as the cluster interconnect.
VPLEX Metro: A cooperative set of two VPLEX clusters, each serving their own storage domain over synchronous distance.
VPLEX Metro HA: As per VPLEX Metro, but configured with VPLEX Witness to provide fully automatic recovery from the loss of any failure domain.
Access Anywhere: The term used to describe a distributed volume using VPLEX Metro.
Federation: The cooperation of storage elements at a peer level.