IBM WebSphere Developer
Technical Journal
Issue 13.7 : October 6, 2010
From the editor
This issue of the IBM® WebSphere® Developer Technical Journal provides tips and advice
for improved productivity, performance, and planning on several different levels. Improve
development productivity with tips to help you work smarter with IBM WebSphere Integration
Developer. Improve application server performance with the right workload management
approach. Improve IT planning by understanding how to properly introduce and adopt new
technologies into your environment. The theme continues throughout this month's articles. Read
on for much more.
Featured articles: Learn about the Java™ EE support provided by the WebSphere Application
Server Feature Pack for SCA, find out whether a proxy server or the HTTP plug-in is the better
workload management option for you, and get organized and more efficient with how you use
WebSphere Integration Developer.
Columns: The latest Comment lines explain why you need to treat new technology as new
technology, and how integrating WebSphere Service Registry and Repository with IBM Tivoli®
Application Dependency Discovery Manager can give you the view you always wanted. Also,
Innovations within reach answers FAQs about the new IBM WebSphere DataPower® XC10
Appliance, The WebSphere Contrarian makes change easy once more, and The Support
Authority describes how to set up WebSphere Application Server to run as a Windows® service.
Your required reading begins below...
Featured articles
Proxy server versus the HTTP plug-in:
Choosing the best WebSphere Application Server workload management option
by John Pape and Robert Westland
Since IBM® WebSphere® Application Server Version 6.0, the WebSphere proxy server has
been available to provide intelligent routing of HTTP requests based on configured routing
rules and performance metrics. Although not as smart as the on demand router component of
IBM WebSphere Virtual Enterprise, the proxy server can provide services above and beyond
the traditional WebSphere HTTP plug-in implementation seen in practically all IHS-fronted
WebSphere Application Server clusters. This article compares these solutions in detail so you
can determine the best choice for your requirements.
Tips for working smarter and increasing productivity with WebSphere Integration
Developer
by Diana Lau
Dealing with a high number of modules, artifacts, and projects at once in IBM WebSphere
Integration Developer can be overwhelming at times. However, there are steps you can take
and features you can leverage that can help you not only become better organized, but also
help you improve build time and increase your productivity as well. This article provides tips
to help you make this happen.
Exploring the WebSphere Application Server Feature Pack for SCA:
Java EE support in the Feature Pack for SCA
by Anbumunee Ponniah, Chao M Beck and Vijai Kalathur
This article continues an earlier series on the functionality provided by the WebSphere
Application Server V7 Feature Pack for Service Component Architecture (SCA). This latest
installment describes the integration of Java™ EE support in the SCA feature pack and the
benefits that are realized with this integration. The feature pack supports the use of Java EE
archives as SCA component implementations, the consumption of SCA exposed services from
Java EE components, and the exposure of stateless session EJB services as SCA services with
support for rewiring those services. These and other features can help Java EE programmers
and architects transcend differences in implementation technologies and leverage an SCA
architecture with little or no changes to their existing code.
The Support Authority
Running WebSphere Application Server as a Windows service
by Alain Del Valle and Dr. Mahesh Rathi
This article will help a domain administrator set up IBM WebSphere Application Server to run
as a Windows service under a domain user account. The process involves the domain
administrator logging in to the local machine and providing the correct rights for the domain
user. The steps to do this are provided along with an example.
Innovations within reach
There's a new purple appliance in town
Frequently asked questions about the WebSphere DataPower XC10 elastic caching
solution
by Charles Le Vay
The IBM WebSphere DataPower XC10 Appliance is a quick, easy, and cost-effective way to
add an elastic data caching tier to enhance your application infrastructure. To help introduce
you to the capabilities of this new appliance, which combines the robust DataPower hardware
appliance platform with IBM's state of the art distributed caching technology, here are the top
ten frequently asked questions about this new product.
The WebSphere Contrarian
Change is hard, or is it?
by Tom Alcott
Changing the LDAP bind password in IBM WebSphere Application Server doesn't have
to be complex and mandate an outage or interruption of service. The WebSphere
Contrarian discusses a simple pattern that can be employed to change the LDAP bind
password used by WebSphere Application Server quickly and easily.
Comment lines
The challenges of introducing new technology
by Andre Tost
"Companies have to deal with new products and new patterns of solution design, new
requirements towards the maintenance and operation of business solutions, and new
opportunities for directly supporting the business needs in IT. However, most
organizations try to address these challenges with their existing roles, responsibilities,
and processes. I want to describe a common issue that I have come to identify as "the
challenge of introducing new technology" and look at the best way an organization can
deal with this challenge..."
Integrating WebSphere Service Registry and Repository with Tivoli
Application Dependency Discovery Manager?
by Robert Peterson
"If you are using IBM WebSphere Service Registry and Repository, chances are that
you're integrating it with other IBM WebSphere products, such as IBM WebSphere
Message Broker or IBM WebSphere DataPower. But did you know that you can also
integrate Service Registry and Repository with several IBM Tivoli products as well? For
example, you can export metadata about WSDL services from Service Registry and
Repository and then load that metadata into IBM Tivoli Application Dependency
Discovery Manager (TADDM). With information on Service Registry and Repository
Web services in TADDM, an administrator can have a holistic view of all the Web
services and policies active in their IT environments from one place..."
Proxy server versus the HTTP plug-in: Choosing
the best WebSphere Application Server workload
management option
Skill Level: Intermediate
John Pape
WebSphere Application Server SWAT Team
IBM
Robert Westland ([email protected])
WebSphere Application Server WLM Architect
IBM
06 Oct 2010
Since IBM® WebSphere® Application Server Version 6.0, the WebSphere proxy
server has been available to provide intelligent routing of HTTP requests based on
configured routing rules and performance metrics. Although not as smart as the
on-demand router component of IBM WebSphere Virtual Enterprise, the proxy server
can provide services above-and-beyond the traditional WebSphere HTTP plug-in
implementation seen in practically all IHS-fronted WebSphere Application Server
clusters. This article compares these solutions so you can determine the best
choice for your requirements.
Introduction
In clustered IBM® WebSphere® Application Server environments, HTTP and
Session Initiation Protocol (SIP) requests are typically load-balanced through a
combination of network layer devices and one or more HTTP server processes
augmented with the WebSphere HTTP plug-in module. This setup is great for load
balancing these requests to back end application servers, as well as dealing with
fault tolerance in the case of server failure. But there is at least one other way to do
this.
This article looks at the traditional approach to load balancing Web requests to a
WebSphere Application Server cluster, and then examines an alternative method
using the WebSphere proxy server. This article will provide a high level overview of
each method and compare them so you can make informed decisions on which is
better for your applications.
Using the HTTP plug-in
If you have a WebSphere Application Server cluster deployed into a production
environment, then chances are good that you have a set of HTTP server instances
placed upstream from your cluster that are outfitted with the WebSphere HTTP
plug-in. This configuration provides load balancing and simple failover support for
the applications that are deployed to the cluster. The HTTP plug-in module is loaded
into an HTTP server instance and examines incoming requests to determine if the
request needs to be routed to a WebSphere Application Server. The plug-in
examines the host and port combination along with the requested URL and
determines whether or not to send the request on. If the request is to be serviced by
WebSphere Application Server, the plug-in copies pertinent information (such as
headers from the original request) and creates a new HTTP request that is sent to
the application server. Once a response is given from the application server, the
plug-in then matches up the response with the original request from the client and
passes the data back. (It’s actually much more complicated than that under the
covers, but this level of detail is sufficient for this discussion.)
Failover support is also a crucial consideration for a WebSphere Application Server
cluster. When a specific server fails, the HTTP plug-in will detect the failure, mark
the server unavailable, and route requests to the other available cluster members.
Figure 1 shows a picture of a sample topology that uses the WebSphere HTTP
plug-in.
Figure 1. Sample topology with the HTTP plug-in in use
If all you want to do is route requests using simple load balancing (such as
round-robin), then the plug-in will work for you. But what if you want to set up more
complex routing? What if you want to direct traffic to one cluster during the day and
another during the night? What if you want your routing to be sensitive to the amount
of load on a certain server? These things are possible when you replace the HTTP
plug-in with a WebSphere Application Server proxy server instance.
Using a proxy server
The WebSphere proxy server was introduced in WebSphere Application Server
Network Deployment V6.0.2. The purpose of this server instance is to act as a
surrogate that can route requests to back end server clusters using routing rules and
load balancing schemes. Both an HTTP server configured with the HTTP plug-in and
the WebSphere Application Server proxy server can be used to load balance
requests being serviced by WebSphere application servers, clusters, or Web
servers. Both are also used to improve performance and throughput by providing
services such as workload management and caching Web content to offload back
end server work. Additionally, the proxy server can secure the transport channels by
using Secure Sockets Layer (SSL) as well as implementing various authentication
and authorization schemes. The load balancing features provided by the proxy
server are similar in nature to the HTTP plug-in.
The proxy server, however, has custom routing rules that the HTTP plug-in does not,
plus significant advantages in terms of usability, performance, and systems
management. In WebSphere Application Server V6.1, the proxy server became a
first class server; it is created, configured, and managed from the deployment
manager either using the console or the wsadmin commands. It uses the HA
manager, on demand configuration (ODC), and a unified clustering framework (UCF)
to automatically receive configuration and run time changes to the WebSphere
Application Server topology.
With the release of WebSphere Application Server V7.0, a new type of WebSphere
proxy server instance is available: the DMZ secure proxy. This server is similar in
form factor to the original proxy server except it is better suited for deployment in the
demilitarized zone (DMZ) areas of the network.
Figure 2 shows a typical proxy server topology.
Figure 2. Sample topology with the proxy server installed
Notice that Figure 2 is very similar to Figure 1, which illustrates how easily proxy
server instances can replace the HTTP servers in the topology. A new
WebSphere Application Server custom profile can be created on these hosts and
federated back to the deployment manager for cell 1, creating proxy servers on each
node. Also notice the new shapes added in the diagram for the core group bridge,
on demand configuration (ODC), and unified clustering framework (UCF) elements,
which show the tight integration with WebSphere Application Server. These
elements are components inside of the cell and, together with the HA manager,
provide the proxy server with the run time and configuration information needed to
support load balancing and failover.
A big strength of the proxy server is its ability to utilize routing rules that are
configured by the WebSphere Application Server administrator. Routing rules are
bits of configuration applied to the proxy that enable inbound requests to be routed
in virtually any manner desired. Aside from routing rules, proxies provide other
capabilities, including:
• Content caching (both dynamic and static).
• Customizable error pages.
• Advanced workload management.
• Performance advisors that can be used to determine application availability.
• Workload feedback, which is used to route work away from busy servers.
• Customizable URL rewriting.
• Denial-of-service protection.
The HTTP plug-in also provides caching of both static and dynamic content but does
not have the other advanced routing capabilities of the proxy server.
Comparing configurations
As Figures 1 and 2 above show, the HTTP server with the plug-in and the proxy server
are both positioned directly in front of the application server tier and fit into a typical
multi-tiered topology in essentially the same place. Both utilize WebSphere Application
Server and application server clusters (in the application tier) to provide deployed
applications with scalability, workload balancing, and high availability qualities of
service.
The next sections compare the similarities and differences between the HTTP server
and the WebSphere proxy server in the areas of their architecture, administration,
caching, load balancing, failover, routing behaviors, and routing transports.
Differences between the proxy server and the DMZ secure proxy server will also be
noted. At the end, Table 1 summarizes the major comparison points.
Architecture
• Proxy server
A WebSphere proxy server is a reverse caching proxy that is included in
WebSphere Application Server Network Deployment (hereafter referred to
as Network Deployment). The proxy server is basically a different type of
WebSphere application server that manages the request workload
received from clients and forwards requests on to the application servers
that run the applications. Because the proxy server is based on WebSphere
Application Server, it inherits these advantages:
• The proxy server can be dynamically informed of cluster
configuration, run time changes, and application information updates
by utilizing the built-in high availability infrastructure, unified clustering
framework, and on demand configuration.
• The proxy server can also use the transport channel framework,
which builds specific I/O management code per platform. Using this
framework enables the proxy to handle thousands of connections and
perform I/O operations very quickly.
The internal architecture of the proxy server was designed using a filter
framework and was implemented in Java™, which enables it to be easily
extended by WebSphere Application Server. Figure 3 shows the
high-level architecture of the proxy server in a Network Deployment
configuration.
Figure 3. Proxy server in a Network Deployment configuration
• HTTP plug-in
The HTTP plug-in integrates with an HTTP server to provide workload
management of client requests from the HTTP server to WebSphere
Application Servers. The plug-in determines which requests are to be
handled by the HTTP server and which are to be sent to WebSphere
Application Server servers. The plug-in uses a plugin-cfg.xml file that
contains application, server, and cluster configuration information used for
server selection. This file is generated on Network Deployment using the
administration console and copied to the appropriate directory of the
HTTP plug-in. When any new application is deployed or any server or
cluster configuration changes are made, the plugin-cfg.xml file must be
regenerated and redistributed to all HTTP servers.
Figure 4 shows the high-level architecture of the HTTP server with the
plug-in routing requests to Network Deployment application servers.
Figure 4. HTTP plug-in routing requests
• DMZ secure proxy server
New in WebSphere Application Server V7.0 is a proxy server that was
designed to be installed in a DMZ, called the DMZ secure proxy
server. Its architecture is the same as a standard proxy server except that
functions which are not needed or not available in the DMZ are removed.
There are three predefined default security levels for the server: low,
medium, and high. When configured using low security, the proxy
behaves and the cluster data is updated in the same manner as a
non-secure proxy. When running with medium security, it again behaves
the same as the standard proxy server, except that the cluster and
configuration information is updated via the open HTTP ports. When the
proxy is configured with the high security level, all routing information is
obtained "statically" from a generated targetTree.xml file, which contains
all the cluster and configuration information required for the proxy server
to determine where to route the HTTP request.
Figure 5 shows the high-level architecture of the DMZ secure proxy
server routing requests to Network Deployment application servers.
Figure 5. DMZ secure proxy server routing requests to application
servers
Administration
• Proxy server
The proxy server is available in Network Deployment and is easily created
on any node on which WebSphere Application Server has been installed.
Because a proxy server is just a different type of WebSphere Application
Server, it is automatically integrated tightly with WebSphere Application
Server system management, and leverages the WebSphere Application
Server administration and management infrastructure. It is very simple to
use the administration commands in the console to create a proxy server,
and the proxy server is automatically configured as a reverse caching
proxy (see Resources). Additional configuration settings are available to
fine-tune the proxy server’s behavior to meet the needs of a particular
environment. These settings include options such as the number of
connections and requests to a server, caching, defining how error
responses are handled, and the location of the proxy logs. Setting the
proper configuration and enabling caching of static or dynamic content
can improve the overall performance of client requests.
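If you prefer scripting to the console, the same task can be done with wsadmin. The sketch below is a minimal Jython example, assuming a federated node named proxyNode01 and a proxy name of proxy1 (both hypothetical); check the AdminTask help in your release for the exact options of the createProxyServer task.

```python
# Minimal wsadmin (Jython) sketch -- run with: wsadmin -lang jython -f create_proxy.py
# Node and server names are hypothetical; verify the task options in your release with
#   print AdminTask.help('createProxyServer')

nodeName  = 'proxyNode01'   # federated node that will host the proxy server
proxyName = 'proxy1'        # name for the new proxy server

# Create the proxy server on the target node
AdminTask.createProxyServer(nodeName, '[-name %s]' % proxyName)

# Save the change to the master configuration repository
AdminConfig.save()

# List proxy servers on the node to confirm the new server definition exists
print AdminTask.listServers('[-serverType PROXY_SERVER -nodeName %s]' % nodeName)
```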
For the most part, the proxy server setup and configuration is the same
for all WebSphere Application Server distributed platforms and for System
z. However, there is one limitation: on System z, you cannot deploy an
application that could be used to serve up a defined error page for various
errors.
Creating a cluster of proxy servers helps in the administration of multiple
proxy servers. It is easy to create a single proxy server, get it fully
configured the way you want, and then create the cluster based on the
configured proxy. Once the cluster has been created, it can be used to
easily add additional proxy servers, all configured exactly the same as the
original member. Having a cluster of proxy servers enables an external IP
sprayer or HTTP plug-in to spray requests to the proxy cluster to eliminate
single points of failure and to support load balancing.
• DMZ secure proxy server
WebSphere Application Server provides a separate installation package
to enable a proxy server to be installed into a DMZ. A DMZ proxy server
requires some additional configuration and setup because there is no
administrative console on the server itself. Rather, the administration of
the secure proxy server is handled with scripting or by using an
administrative agent. Administration is also supported through the
administrative console of a back end Network Deployment cell. A DMZ proxy
server profile can be created and configured, and then exported to the
secure proxy profile of the DMZ image. The profile created on the
Network Deployment cell is for configuration only and should not be used
for any other purpose. Only the secure proxy profile on the DMZ image is
fully operational.
To harden the security of the DMZ secure proxy server, these capabilities
are available:
• Startup user permissions.
• Routing consideration.
• Administration options.
• Error handling.
• Denial of service protection.
You select the security level you want from one of the three predefined
default values (low, medium and high) during proxy server creation. You
can customize various settings, but the resulting security level will
become the lowest level associated with any of the settings.
• HTTP plug-in
The HTTP plug-in is shipped with WebSphere Application Server as a
separately installed product and runs inside various HTTP servers to
replace the basic routing provided by the HTTP server.
You must install an HTTP server first, and then install the HTTP plug-in.
When the installation is completed, the plugin-cfg.xml file needs to be
created by the WebSphere Application Server deployment manager and
saved to the appropriate plug-in directory on the system where the plug-in
is installed.
As workload is sent to the HTTP server, the server uses information from
its configuration file to determine if the request should be handled by itself
or by the plug-in. If the plug-in is to handle the request, it uses the
information contained in a plugin-cfg.xml file to determine which back end
application server the request should be sent to. When configuration
changes occur, the plugin-cfg.xml file must be regenerated and replaced
in the plug-in directory. The HTTP plug-in automatically reloads the file at
a configured time interval; the default is every 60 seconds.
Since WebSphere Application Server V6.0.2, the plug-in configuration file
can be generated as one of two different types:
• A topology-centric file includes all applications within a cell and is
not managed by the administrative console. It is generated using the
GenPluginCfg commands and must be manually updated to change
any plug-in configuration properties.
• An application-centric file has a granularity that enables each
application to be mapped to its specific Web or application server,
and can be managed and updated using the administration console.
The HTTP plug-in has numerous configuration settings contained within
the plugin-cfg.xml file (see Resources).
The HTTP servers can also be configured as reverse caching proxies, but
additional configuration is required after installation to support this. This
type of configuration is typically used when you want clients to access
application servers behind a firewall.
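To make the structure of the plug-in configuration concrete, the short sketch below (plain Python, run offline) walks a plugin-cfg.xml file and prints which URI patterns route to which cluster members. The element and attribute names follow the plug-in configuration schema described above; the file path and attribute values are hypothetical, so treat this as an illustration rather than a supported tool.

```python
# Offline sketch (standard Python, not wsadmin) that summarizes a plugin-cfg.xml file.
# Element and attribute names (ServerCluster, Server, Transport, UriGroup, Uri, Route)
# follow the plug-in configuration schema; the file path and values are hypothetical.
import xml.etree.ElementTree as ET

tree = ET.parse('plugin-cfg.xml')   # copy taken from the HTTP server's plug-in directory
config = tree.getroot()

# Map each URI group to the URI patterns it contains
uri_groups = {
    group.get('Name'): [uri.get('Name') for uri in group.findall('Uri')]
    for group in config.findall('UriGroup')
}

# Map each server cluster to its members and their HTTP(S) transports
clusters = {}
for cluster in config.findall('ServerCluster'):
    members = []
    for server in cluster.findall('Server'):
        transports = ['%s:%s' % (t.get('Hostname'), t.get('Port'))
                      for t in server.findall('Transport')]
        members.append((server.get('Name'), server.get('LoadBalanceWeight'), transports))
    clusters[cluster.get('Name')] = (cluster.get('LoadBalance'), members)

# A Route element ties a URI group (and virtual host group) to a server cluster
for route in config.findall('Route'):
    cluster_name = route.get('ServerCluster')
    policy, members = clusters.get(cluster_name, ('?', []))
    print('URIs %s -> cluster %s (%s)' % (uri_groups.get(route.get('UriGroup'), []),
                                          cluster_name, policy))
    for name, weight, transports in members:
        print('  member %s weight=%s transports=%s' % (name, weight, transports))
```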
Load balancing and failover
Both the HTTP plug-in and WebSphere proxy server support workload management
for load balancing and failover of client requests to application servers. Each provides
administrative control over where and how requests are routed, with
more functionality available in the proxy server. However, there are some important
differences between the two.
• Cluster support
The main difference centers around how each gets access to cluster data.
The HTTP plug-in uses "static" configuration information obtained from
the plugin-cfg.xml file. The proxy server obtains cluster data dynamically,
so that when run time changes are made, such as starting or stopping a
member, changing a member's weight, or a change in a member's availability,
the information is updated in the proxy server at run time. Therefore, the
proxy server is able to use a "runtime view" of the application's cluster
during selection, so only running members of the application are included,
along with any run time configuration settings that have been made.
The HTTP plug-in uses the cluster configuration information from the
plugin-cfg.xml file. This information is static and is not updated
dynamically during run time. It takes an administrative act to generate a
new plugin-cfg.xml file and make it available to the running HTTP plug-in.
Network Deployment does permit you to configure the Web server to
provide some support for automatically generating the plug-in
configuration file and propagating it to the Web server.
The proxy server also supports defining a generic server cluster. This is a
cluster that is configured to represent a group of resources whose
management falls outside the domain of WebSphere Application Server.
The cluster is configured and used by the proxy server to load balance
requests to these resources. Keep in mind that because these are not
WebSphere Application Server servers, the same qualities of service
available to a WebSphere Application Server cluster are not available.
The HTTP plug-in does not support generic server clusters; however,
you can manually edit the information in the plugin-cfg.xml file. This can
provide some benefits for generic servers, but it is most useful for merging
the plugin-cfg.xml files from two different cells so that a single HTTP
server can route to multiple WebSphere Application Server cells. You can
also group standalone servers or multiple cluster members into a
manually-configured cluster that is only known to the plug-in. You must
take extreme care when making any manual changes to the
plugin-cfg.xml file. The proxy server does not permit this type of editing of
cluster and routing information.
In a DMZ secure proxy server, the security level determines whether the
proxy uses dynamic or static cluster data. Using a low or medium security
level, the proxy server uses dynamic cluster data and basically behaves
as a non-DMZ proxy server. However, when running at the high security
level, the routing information is obtained from the targetTree.xml file and
the data is the static cluster configuration information. The targetTree.xml
file is generated on the cell(s) the proxy will be sending the HTTP
requests to. Any time the back end server and cluster configurations
change, the targetTree.xml file must be regenerated and updated in the
secure proxy server.
• Routing and selection
Since WebSphere Application Server V7.0, the proxy server and HTTP
plug-in support two different routing algorithms:
• The random algorithm ignores the weights on the cluster members
and just selects a member at random.
• The weighted round-robin algorithm uses each cluster member’s
associated weight value and distributes requests among the members
based on the set of member weights for the cluster. For example, if all
members in the cluster have the same weight, the expected distribution is
that all members receive the same number of requests. If the weights are
not equal, the distribution mechanism will send more requests to a member
with a higher weight value than to one with a lower weight value. This
provides a policy that ensures a desired distribution, based on the weights
assigned to the cluster members. Valid weight values range from 0 to 20,
with a default value of 2 (a simplified sketch of this selection scheme
follows below).
The proxy server selection also includes a client side outstanding request
feedback mechanism called blended weight feedback. This uses the
member weight information along with the member’s current observed
outstanding request information. This feedback provides a mechanism to
route work away from members that have more outstanding requests
relative to the other members in the cluster.
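The sketch below illustrates the idea of weighted selection in plain Python. It is not the product's actual selection code; it simply shows how per-member weights translate into a proportional, interleaved distribution of requests.

```python
# Simplified weighted round-robin sketch, assuming the weight semantics described
# above (values 0-20, default 2). Member names and weights are hypothetical.
import itertools

def weighted_round_robin(members):
    """members: list of (name, weight) pairs; yields member names indefinitely."""
    # Members with weight 0 are excluded from normal selection.
    members = [(name, weight) for name, weight in members if weight > 0]
    if not members:
        return
    credits = {name: 0 for name, _ in members}
    while True:
        # Grant each member its weight in credits at the start of a cycle...
        for name, weight in members:
            credits[name] += weight
        # ...then hand out one request at a time to the member with the most
        # remaining credits, which interleaves higher- and lower-weighted members.
        while any(count > 0 for count in credits.values()):
            name = max(credits, key=credits.get)
            credits[name] -= 1
            yield name

# With equal weights every member gets the same share; here serverB (weight 4)
# receives twice the requests of serverA (weight 2).
picks = list(itertools.islice(weighted_round_robin([('serverA', 2), ('serverB', 4)]), 12))
print(picks.count('serverA'), picks.count('serverB'))   # -> 4 8
```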
• Failover
Failover is provided in the event client requests can no longer be sent to a
particular cluster member. This can be caused by a variety of conditions,
such as being unable to establish a connection, or having the connection
prematurely closed by the cluster member. When these failures occur, the
proxy server and HTTP plug-in will mark the member as unavailable and
route the request to another member. Both support configuration
parameters that are used to fine-tune the detection and retry behavior.
These parameters include things like setting the length of time for
requests, connections, and server time-outs.
Aside from connection failures and time-outs, the HTTP plug-in uses the
maximum number of outstanding requests to a server to indicate when a
server is hanging because of some sort of application problem. Keeping
track of outstanding requests and recognizing when their number becomes
higher than a configured value can be an acceptable means of failure
detection in many situations, because application servers are not expected
to become blocked or hung.
There is one behavior difference that should be noted here: if a cluster
member is stopped while client requests are being sent, the plug-in will
continue to send requests to the stopped server until a request fails and
the server is marked unavailable. However, the proxy server might be told
that the member has been stopped before a client request is sent, and can
remove the member from the selection algorithm. This can eliminate
sending requests to an unavailable server. Again, this is possible
because the proxy receives its cluster information dynamically.
• HTTP session management
The routing behavior of both the HTTP plug-in and the proxy server is
affected by how HTTP session management and distributed sessions are
configured for the application and servers. This configuration
involves session tracking, session recovery, and session clustering. When
an HTTP client interacts with a servlet supporting session management,
state information is associated with the HTTP session, is identified by a
session ID, and is then available for a series of the client requests. The
proxy server and HTTP plug-in both use session affinity to help achieve
higher cache hits in WebSphere Application Server and reduce database
access. When supporting session affinity, the proxy server and plug-in will
try and direct all requests from a client -- for a specific session -- back to
the same server on which the session was created. The default
mechanism is to read the JSESSIONID cookie information passed along
in the request, which contains the sessionId and serverId. This
information will inform the selecting code to try and select the same
member of the cluster for each request. Two other mechanisms that can
be used to support session affinity are URL rewriting and SSL ID value.
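The sketch below illustrates the affinity idea in plain Python, assuming the commonly documented JSESSIONID layout of cache ID plus session ID followed by a colon-separated clone ID chain. The clone IDs and member names are hypothetical; in a real configuration the clone ID corresponds to the CloneID value defined for each server.

```python
# Simplified sketch of cookie-based session affinity. It assumes a JSESSIONID of the
# form <cacheId><sessionId>:<cloneId>[:<cloneId>...]; clone IDs and server names
# below are hypothetical placeholders, not values from any real configuration.

CLONE_TO_SERVER = {
    'vclone01': 'AppSrv01_member1',
    'vclone02': 'AppSrv01_member2',
}

def pick_member_for(jsessionid, fallback_selector):
    """Prefer the member that created the session; otherwise load balance normally."""
    if jsessionid and ':' in jsessionid:
        # Everything after the first ':' is the clone ID chain (primary first).
        for clone_id in jsessionid.split(':')[1:]:
            server = CLONE_TO_SERVER.get(clone_id)
            if server is not None:
                return server
    return fallback_selector()

# A request carrying an affinity cookie sticks to member1; a request with no
# session cookie falls back to normal selection.
print(pick_member_for('0000A1b2C3d4E5f6:vclone01', lambda: 'AppSrv01_member2'))
print(pick_member_for(None, lambda: 'AppSrv01_member2'))
```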
• HTTP session failover
To support session failover, session management must be enabled and
distributed sessions must be configured. You can configure either
database or memory-to-memory replication to
save session data in a distributed environment. Depending on which
setting is used, the HTTP plug-in and proxy server behave differently.
Both the proxy server and HTTP plug-in keep track of which servers own
which sessions by using the sessionId returned by the server. If session
data is to be maintained in a database, and the server the session
currently exists on fails, then one of the other cluster members will be
selected, the session information will be obtained from the database, and
the session will now be associated with this server.
However, if the session data is configured as memory-to-memory, then
WebSphere Application Server will select a primary and one or more
backup servers to keep backup data for each session. This enables a
failed session request to be sent to a server already holding backup data
for the session. The proxy server automatically supports this behavior.
The HTTP plug-in can be configured to support "hot session" failover, and
when this is done, a table of session information is obtained from the
WebSphere Application Server containing the appropriate mapping of
server to sessionId. If a request to a specific session fails, the plug-in will
make a special request to one of the other cluster members to obtain new
session data, which will contain the new server to session mapping
information that will be used for selection.
Table 1 summarizes the above comparisons.
Table 1. Comparison summary
Architecture
HTTP plug-in:
• Integrated into the HTTP server
• Uses a static configuration file, plugin-cfg.xml
• Application or configuration changes require regenerating the plugin-cfg.xml file
Proxy server:
• A reverse caching proxy
• Specialized type of WebSphere application server
• Extendable proxy filter framework
• Java implementation
• No XML files needed; dynamic run time cluster data (ODC, UCF, HAMgr)
DMZ secure proxy server:
• A reverse caching proxy
• Specialized type of WebSphere application server
• Extendable proxy filter framework
• Java implementation
• No XML files needed; dynamic run time cluster data (ODC, UCF, HAMgr) in low and medium security modes
• Uses a static configuration file, targetTree.xml, in high security mode

Administration
HTTP plug-in:
• Separate installation
• Runs inside the HTTP server
• Requires a static plugin-cfg.xml file created in a separate Network Deployment installation
• Manual editing of its configuration files
• Can be loosely integrated with WebSphere Application Server system management
Proxy server:
• Easily created
• Configured through the WebSphere Application Server console and JMX
• Tightly integrated with WebSphere Application Server system management
DMZ secure proxy server:
• Separate installation
• Requires an administrative agent or a WebSphere Application Server DMZ proxy profile
• Three predefined security levels: low, medium, high

Caching
HTTP plug-in:
• Static content, such as images or HTML files
• Supports using FRCA to cache certain dynamically generated servlet and JSP files
Proxy server:
• Static content, such as images or HTML files
• Dynamic content, such as servlet-returned information
DMZ secure proxy server:
• Same as the non-DMZ proxy server

Load balancing
HTTP plug-in:
• Static routing uses the plugin-cfg.xml file for routing data
• Must regenerate the plugin-cfg.xml file when a new cluster member is created and started
• Does not support generic server clusters with affinity
Proxy server:
• Dynamic routing uses dynamically updated routing data
• Automatically informed when a new cluster member is created and started
• Supports generic server clusters with passive or active affinity
DMZ secure proxy server:
• Low and medium security modes: dynamic routing
• High security mode: static routing
• Low and medium security modes: supports generic server clusters with passive or active affinity

Failover
HTTP plug-in:
• Must detect that a server is unavailable through a failed request
• The configured retryInterval setting defines the time to wait before retrying a failed server
Proxy server:
• Automatically notified when a member becomes unavailable
• Automatically informed when a failed server becomes available again
DMZ secure proxy server:
• Low and medium security modes: same function as the non-DMZ proxy server
• High security mode: must detect that a server is unavailable through a failed request
• Uses a health monitor to detect when a failed server is available again

Routing behaviors
HTTP plug-in:
• Supports round-robin selection
• Can be configured as a reverse caching proxy
• The plugin-cfg.xml file can be manually edited to manipulate routing behavior
Proxy server:
• Proxy server actions, rule expressions, routing actions, and routing rules enable an administrator to control routing behavior for various business reasons
• Custom advisor support
• Is a reverse caching proxy
• Supports round-robin and random selection
DMZ secure proxy server:
• Same functions as the non-DMZ proxy server

Routing transports
HTTP plug-in:
• Uses the standard HTTP transport protocol
Proxy server:
• Uses the highly scalable WebSphere Application Server transport channel framework
DMZ secure proxy server:
• Same functions as the non-DMZ proxy server

Routing protocol
HTTP plug-in:
• HTTP
Proxy server:
• HTTP
• Session Initiation Protocol (SIP)
• Web services, such as WS-Addressing
DMZ secure proxy server:
• Low and medium security modes: same protocols as the non-DMZ proxy server
• High security mode: HTTP only
Conclusion
While both the HTTP plug-in and the WebSphere proxy server deliver some
overlapping modes of operation and capabilities, the WebSphere proxy server can
be the more intelligent solution in many cases. However, "more intelligent" in this
case can often also mean more complex. Many deployments will probably
continue to use the HTTP plug-in until they reach a point where improved caching or
advanced routing becomes a critical requirement.
Resources
• Know your proxy server basics
• Redbook: WebSphere Application Server Network Deployment V6: High
Availability Solutions
• Getting "Out Front" of WebSphere: The HTTP Server Plugin
• Information Center
• Setting up the proxy server
• Installing a DMZ Secure Proxy Server for IBM WebSphere Application
Server
• Configuring a DMZ Secure Proxy Server using the administrative console
• Installing IBM HTTP Server
• Configuring IBM HTTP Server
• Installing Web server plug-ins
• Plug-ins configuration
• IBM developerWorks WebSphere
About the authors
John Pape
John Pape currently works with the WebSphere SWAT Team and focuses on crit-sit
support for clients who utilize WebSphere Application Server, WebSphere Portal
Server, and WebSphere Extended Deployment. This role requires attention to detail
as well as maintaining a “think-out-of-the-box” innovative mindset, all the while
assuring IBM customers get the best support possible!
Robert Westland
Robert Westland is the architect of the WebSphere Application Server Workload
Management (WLM) component. He currently works on the WLM and HA team at
IBM Rochester, Minnesota. His focus is on the architecture of the WLM support for
the WebSphere Application Server. He has been working in the WebSphere
Application Server organization and on the WLM team for 8 years.
Tips for working smarter and increasing
productivity with WebSphere Integration Developer
Skill Level: Intermediate
Diana Lau ([email protected])
Software Developer
IBM
06 Oct 2010
Dealing with a high number of modules, artifacts, and projects at once in IBM®
WebSphere® Integration Developer can be overwhelming at times. However, there
are steps you can take and features you can leverage that can help you not only
become better organized, but also help you improve build time and increase your
productivity as well. This article provides tips to help you make this happen.
Introduction
This article shows you some valuable hints and tips to help you use IBM WebSphere
Integration Developer V7 more efficiently, especially when you are dealing with a
large number of modules and artifacts. These tips include different ways to reduce
workspace build time and publish time, how to use the Test Client to create test
cases and organize them in test projects, and how to use cross component
trace to make the unit test phase more efficient. In addition, you will learn how to
reduce clutter by maximizing the working space in various editors
and how to use the filtering options to filter out unnecessary artifacts.
The tips covered here are presented in these major areas:
1. Reducing workspace build time and publish time
2. Testing in the WebSphere test environment
3. Better artifact organization and team collaboration
4. Maximizing your working space
5. Using the search and filter capabilities
1. Reducing workspace build time and publish time
The more artifacts you have in the workspace, the higher the memory consumption
and the longer the build time. There are a few ways to reduce workspace build time:
a. Close or remove the unused projects
b. Turn on the XSD and WSDL validation filtering as needed
c. Use the “Do not participate in clean” option in libraries
d. Leverage the Build Activities view to control validation and publishing
e. Test the XML map locally without deploying the application to the server
1a. Closing or removing the unused projects
All the artifacts in open projects are added to the internal index system in
WebSphere Integration Developer (hereafter called Integration Developer), which in
turn consumes memory and affects build time. To avoid unnecessarily high memory
usage, close or remove the projects that you do not need.
If you have multiple roles in a business process management (BPM) project, consider
using different workspaces when working on different parts of the project. This avoids
loading unnecessary projects into the workspace. You can use an integration solution
to make this process clear and easy: you can always see all the modules and libraries
that are part of the solution and can load or unload parts of it as needed.
1b. Turning on XSD and WSDL validation filtering
When dealing with industry schemas, such as standards from Open Travel Alliance
(OTA), Association for Cooperative Operations Research and Development
(ACORD), or third party schemas, you cannot change the XSD and WSDL files.
Therefore, there is no need to validate them during every workspace build. In the
Properties dialog of a module or library, you can specify all the XSD and WSDL files
or the groups of namespace prefixes that are not to be validated by selecting
Business Integration > XSD and WSDL validation filtering, as shown in Figure 1.
You can add your own filters on top of the predefined ones. The options are shown
in Figure 2.
Figure 1. Project properties under Business Integration category
Figure 2. Options for XSD and WSDL validation filtering
1c. Using the “Do not participate in Clean” option in a library
Similar to the XSD and WSDL validation filtering, you can enable the “Do not
participate in Clean” option if you do not want to revalidate, rebuild, and update
markers in the files within a library when performing “clean all”. In the Properties
dialog of a library, select Business Integration > Participation in Clean. On the
right panel, you see the checkbox for this option as shown in Figure 3.
Figure 3. Participation in Clean option in Properties dialog
1d. Leveraging the Build Activities view to control validation and publishing
The Build Activities view allows you to select workspace activities to run during a
build. The “Validate and update deploy code” is the default selection, which is also
the recommended one (Figure 4). This means that when you save the changes, the
server is not updated even though the affected applications are deployed on the
server. Requiring an explicit republish action avoids spending unnecessary time
updating the server when the application is not yet ready to be republished.
Figure 4. Build Activities View
For more details, see the Build Activities view section in the Integration Developer
Information Center.
1e. Testing the XML map
The XML map editor lets you test the mapping locally without starting the server or
deploying the module. You can invoke the Test Map function from the toolbar
(Figure 5) or the context menu of the map (Figure 6).
Figure 5. Test Map toolbar item in XML map editor
Figure 6. Test Map menu item
For more details, refer to the Test XML Maps section in the Integration Developer
Information Center.
2. Testing in the WebSphere test environment
Deploying your application and testing it in the Test Client allows you to quickly
verify the components that you are currently developing. When components in a
module call out to different services in other modules, there are a few alternatives
to offload the WebSphere test environment (WTE):
a. Moving applications to a common development test environment
b. Increasing the WebSphere test environment heap size
c. Organizing tests using the Integration Test Client
d. Using cross component trace
2a. Moving applications to a common development test environment
For the modules that are only needed for service calls, you may not need to install
those applications on the same machine as the WebSphere test environment. That
means you may not need to import the modules you are not actively working on into the
workspace.
You can set up a common development test environment to deploy those services
on a separate server. This not only offloads your own WTE, but also improves the
build time because fewer artifacts are added to the workspace.
Another advantage is that each developer can deploy the latest update of the
applications to the development test server for other developers to call.
2b. Increasing the WebSphere test environment heap size
The WTE is intended for testing smaller-scale applications. If the
default setting of the server’s heap size is not sufficient (that is, when you get an
OutOfMemoryError when running the applications), you may want to increase the
heap size of the server. For more details, see the WebSphere Integration Developer
frequently asked questions page.
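If you prefer to script the change, the minimal wsadmin (Jython) sketch below raises the heap sizes for a test server. The node and server names are hypothetical, and the same settings are available in the administrative console under the server's Java virtual machine properties.

```python
# Minimal wsadmin (Jython) sketch, assuming the test server is named 'server1' on
# node 'wteNode01'; adjust the names for your profile.

serverId = AdminConfig.getid('/Node:wteNode01/Server:server1/')
jvm = AdminConfig.list('JavaVirtualMachine', serverId)

# Raise the initial and maximum heap sizes (values are in MB and are examples only)
AdminConfig.modify(jvm, [['initialHeapSize', '512'], ['maximumHeapSize', '1024']])
AdminConfig.save()

# The server must be restarted for the new heap settings to take effect.
```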
2c. Organizing tests using the Integration Test Client
Use the Integration Test Client in Integration Developer to help organize test cases
and to make testing easier. You can learn more about test case support from the
developerWorks article Taking component testing to the next level in WebSphere
Integration Developer. Once deployed, you can also run the test cases through the
web browser without Integration Developer.
2d. Using cross component trace
When there are numerous components and modules involved in your BPM solution,
it is helpful to trace and identify where the unexpected behavior occurred. Cross
component trace can help you in problem determination when your application is not
running as expected. To enable the cross component trace, go to the Server Logs
view and then select the View Menu icon > Cross-Component Trace State as
shown in Figure 7. By default, it is disabled.
Figure 7. Enable cross component trace
Figure 8 shows a sample output from the cross component trace that involves an
SCA component in a module that is calling a component in another module.
Figure 8. Sample output of cross component trace
3. Better artifact organization and team collaboration
This section provides tips on:
a. Using shared libraries
b. Using sticky notes
c. Enhanced team support in mediation flow
d. Using the Integration Solution
3a. Using shared libraries
Business objects and interfaces are the building blocks in Integration Developer.
They are often referenced in multiple modules. For better reuse, it is always a
good idea to put those common artifacts in a library. It also makes the design
cleaner: instead of keeping multiple copies of what is meant to be the same business
object definition in multiple locations, you create the business object once in the
library, have the modules reference the library, and make changes in one place.
If the library is selected to be deployed with the module in the dependencies editor,
the module is packaged as an EAR and contains a copy of the library JAR file during
deploy time. If there are multiple modules referencing the same library, each EAR
will have a copy of the library. To reduce the deploy time, you can set up the shared
library on the server.
Note that when these common artifacts are changed in the library, the server needs
to be restarted for changes to be effective.
For configuration details about a shared library, refer to this technote.
3b. Using sticky notes
Sticky notes serve as reminders to yourself or others, and do not replace the
description fields in the properties of the components. You can add notes in the
Assembly editor and the Process editor.
Besides adding text, you can add tags and hyperlinks to the note. The “TODO” and
“FIXME” tags are predefined Java™ compiler tags; hence, they appear in the Tasks
view. You can also define hyperlinks, as depicted in Figure 9.
Figure 9. Sticky note
In addition, you can define your own custom tags. This is also done through the Java
compiler settings. To do so, switch to the Java perspective and then go to the
Preferences page. From Java > Compiler > Task Tags, you can add your own
(Figure 10).
Figure 10. Configure your own tag
3c. Enhanced team support in mediation flow
To minimize the chance of conflicts with other developers, we recommend
that developers work on separate components, which correspond to separate
physical resources. When creating a mediation flow, you can use an option to save
the mediation flow in a single file or multiple files (Figure 11). If the latter option is
selected, a new physical file is created when an operation connection is made. This
allows multiple developers to work on different mediation flows in the same
mediation flow component.
Figure 11. New Mediation Flow wizard
You can also configure this in the Preferences page by selecting Business
Integration > Mediation Flow Editor (Figure 12).
Figure 12. Preference to set options for creating mediation flow
3d. Using the Integration Solution
Introduced in V6.2, the Integration Solution section in the Business Integration view
helps you to organize related modules and libraries. It also shows the relationships
(or bindings) with other modules.
Other benefits include:
• Easier team development. You can check in and check out the solution,
together with the projects associated with it.
• Easier to organize documentation. You can add documents related to
the solution, such as design and architectural documents.
• Easier to load and unload the modules that are relevant to the tasks at hand,
while the user still has a full picture of the entire solution.
4. Maximizing your working space
In the mediation flow editor, BPEL editor, and Assembly editor, you can collapse the
trays to maximize your working space, as shown in Figure 13.
Figure 13. Maximize your working space in the Mediation Flow editor
Figure 14 shows a screen capture after all the trays have been collapsed.
Figure 14. Mediation Flow editor after all the trays are collapsed
Similarly, you can do the same for the BPEL editor and Assembly editor,
as depicted in Figure 15.
Figure 15. Maximize your workspace in the BPEL editor
In the Test Client, you can also maximize different panes, depending on the current task
(Figure 16).
Figure 16. Maximize different pane in the Test Client
5. Using the search and filter capabilities
The Open Artifact dialog and Artifact Search dialog help you to find a specific artifact
more easily.
You can open the Open Artifact dialog from the toolbar (Figure 17). All the artifacts
that you see in the Business Integration view are available for selection.
Figure 17. Open Artifact toolbar item
In the Search dialog, there is a Business Integration Search tab as shown in
Figure 18. You can limit your search based on the type, name, or namespace of the
artifacts that you are looking for.
Figure 18. Search dialog
5a. Using the References view
The References view shows the relationship between an object selected in the
Business Integration view and the artifacts that it references. Therefore, it saves you
the time of navigating through the artifacts to find their dependencies.
Take the example shown in Figure 19. By selecting MyBO1 in the Business
Integration view, you can quickly see that it references MyChildBO, which is a
business object, and that it is itself referenced by “MyInf1”, which is an interface.
Figure 19. Business Integration view with References view
Conclusion
This article provided hints and tips that can potentially improve your productivity
when using WebSphere Integration Developer. Reducing workspace build time and
publish time is one of the key areas to focus on when you are dealing with
a large number of artifacts and projects, and you can see a significant improvement in
build time by adopting these tips.
This article also discussed how to use the WebSphere test environment, the Test
Client, and the cross component trace capability to make the unit testing effort more
efficient. In addition, better artifact organization and maximizing your working
space inside Integration Developer can also help your productivity.
Acknowledgements
The author would like to thank Phil Coulthard and Grant Taylor for their technical
review of this article.
Resources
Learn
• WebSphere Integration Developer V7 Information Center
• Rational Application Developer performance tips
• WebSphere Integration Developer Information Center: Build Activities view
• WebSphere Integration Developer Information Center: Testing XML Maps
• Team development using CVS
• XML mapping in WebSphere Integration Developer, Part 1
• Taking component testing to the next level in WebSphere Integration Developer
• Using shared libraries on WebSphere Process Server
• Frequently asked questions (FAQs) about WebSphere Integration Developer
Discuss
• WebSphere Integration Developer discussion forum
About the author
Diana Lau
Diana Lau is a Software Developer on the WebSphere Business Process
Management SWAT team at the IBM Toronto Software Lab, Canada. She works
closely with customers to resolve technical issues and provide best practices for
implementing BPM solutions.
Exploring the WebSphere Application Server
Feature Pack for SCA, Part 8: Java EE support in
the Feature Pack for SCA
Skill Level: Intermediate
Anbumunee Ponniah ([email protected])
Software Engineer
IBM
Chao M Beck
Software Engineer
IBM
Vijai Kalathur ([email protected])
Software Engineer
IBM
06 Oct 2010
This article describes the integration of Java™ EE support in the IBM® WebSphere®
Application Server Feature Pack for Service Component Architecture (SCA). The
feature pack supports use of Java EE archives as SCA component implementations,
the consumption of SCA exposed services from Java EE components, and the
exposure of stateless session EJB services as SCA services with support for rewiring
those services.
Introduction
Java Platform, Enterprise Edition (Java EE) is the most prevalent and widely
adopted standard for enterprise application development using the Java
programming language. While the Java EE specification provides a rich set of
technologies, it lacks support for extensible component implementation
technologies, for extensions to abstract transport and protocol assembly, and for
deploying components that transcend application boundaries. As such, it falls
somewhat short of supporting a true service-oriented architecture (SOA).
The IBM WebSphere Application Server Feature Pack for Service Component
Architecture (SCA) extends its own assembly, implementation type, deployment, and
quality of service concepts to a Java EE application within that application's context.
This enables the use of Java EE components as service component
implementations, and also makes it possible to consume SCA artifacts, such as
services and properties, from Java EE modules using SCA annotations. Additionally,
it enables the ability to expose Enterprise JavaBean™ (EJB) services as SCA
services, and to rewire EJB references.
Java EE archives as SCA component implementations
There are two basic scenarios for using Java EE archives as SCA component
implementations:
• The first is a non-SCA-enhanced scenario, in which a Java EE archive
is made available in an SCA domain for use by SCA components using
other implementation technologies.
• The second is an SCA-enhanced scenario, in which Java EE modules
can make use of the benefits of an SCA, such as rewiring, defining SCA
properties, defining references to other SCA services in the domain, and
using dependency injection.
These two types of integration scenarios are illustrated in Figure 1.
Figure 1. Java EE integration scenarios
Non-SCA-enhanced scenario
A Java EE archive representing a Java EE application and containing Java EE
modules (such as EJB and Web modules) can be used as an SCA component
implementation participating in an SCA composite. The SCA feature pack supports
an external EAR contribution, in which the Java EE archive is packaged and
deployed outside of the SCA contribution.
In the simplest scenario, if you have a Java EE archive named HelloJeeEar.ear and
an SCA contribution JAR named HelloJeeSca.jar with a deployable composite, then
the Java EE archive can be used as a component implementation in the SCA
contribution by means of the syntax shown in Listing 1.
Listing 1. Sample SCA composite
<?xml version="1.0" encoding="UTF-8"?>
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://foo" name="HelloJeeScaServiceComposite">
<component name="HelloJeeComponent">
<implementation.jee archive="HelloJeeEar.ear"/>
</component>
<component name="MyScaComponent">
<implementation.java class="sca.jee.HelloJeeScaServiceImpl"/>
<service name="HelloJeeScaService">
<interface.java interface="sca.jee.HelloJeeScaService"/>
</service>
</component>
</composite>
In accordance with the OSOA (Open Service Oriented Architecture) Java EE
Integration specification (see Resources), the derived component type will contain
these services and references:
• Each EJB 3 business interface with the unqualified name intf on a
session bean named bean translates into a service named bean_intf.
• Each EJB 3 reference with the name ref on a session bean named bean
translates into an SCA reference named bean_ref.
In the above example, if HelloJeeEar.ear contains an EJB module HelloJeeEjb.jar
that has a session bean HelloJeeEjb with a remote business interface
HelloJeeSBeanRemote, then the name of the derived service will be
HelloJeeEjb_HelloJeeSBeanRemote. The service interface derived will consist
of all the methods of the EJB business interface.
The derived services can be referred to in other SCA components, just as with any
other SCA service. It is important to note that the service interface will be remotable
if and only if it is derived from the bean's remote business interface. Thus, without
any need to change the implementation code, the EJB services and EJB references
from a Java EE archive can become part of an SCA composite.
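For illustration (the reference name helloJee and the package of the remote interface are assumptions, not part of the original sample), the MyScaComponent component from Listing 1 could wire a reference to the derived service like this:

<component name="MyScaComponent">
    <implementation.java class="sca.jee.HelloJeeScaServiceImpl"/>
    <service name="HelloJeeScaService">
        <interface.java interface="sca.jee.HelloJeeScaService"/>
    </service>
    <reference name="helloJee" target="HelloJeeComponent/HelloJeeEjb_HelloJeeSBeanRemote">
        <interface.java interface="sca.jee.HelloJeeSBeanRemote"/>
    </reference>
</component>

The implementation class would, in turn, declare a matching @Reference field typed to the remote business interface.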
SCA-enhanced scenario
Although the SCA feature pack does not support the use of a Java EE archive itself
as an SCA contribution, it does support the use of such an archive as a component
implementation within a deployable composite in an SCA contribution. In the
SCA-enhanced scenario, the Java EE archive must include a distinguished composite
file named application.composite in its META-INF directory, as illustrated in Figure 2.
Figure 2. application.composite
When such a Java EE archive is used as an SCA component implementation within
the implementation.jee directive, the SCA runtime automatically includes the artifacts
specified in such a composite when deriving the component type. In order to expose
any EJB services as SCA services, or to consume SCA services,
application.composite needs to include components with EJB or Web
implementation types using the deployable EJB and Web modules packaged in the
same archive.
For example, consider a Java EE archive named HelloJeeEnhancedEar.ear
containing an EJB module named HelloJeeEnhancedEjb.jar, which in turn contains
a stateless session bean named HelloJeeEnhancedSBean and a Web module
named HelloJeeEnhancedWeb.war. The archive might have the application
composite shown in Listing 2.
Listing 2. application.composite
<?xml version="1.0" encoding="UTF-8"?>
<composite xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://www.osoa.org/xmlns/sca/1.0" xmlns:foo="http://foo"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xsi:schemaLocation="http://www.osoa.org/xmlns/sca/1.0
http://www.osoa.org/xmlns/sca/1.0"
name="EnhancedEarComposite"
targetNamespace="http://foo" autowire="false">
<service name="HelloJeeEnhancedSBean_HelloJeeEnhancedSBeanRemote"
promote="EnhancedSbeanComponent/HelloJeeEnhancedSBean_HelloJeeEnhancedSBeanRemote">
<binding.sca/>
</service>
<reference name="sbean2" promote="EnhancedSbeanComponent/sbean2"
target="HelloJeeScaComponent/HelloJeeScaService">
<interface.java interface="sca.jee.HelloJeeScaService" />
</reference>
<property name="propertyejb" type="xsd:string">EJBIBM</property>
<property name="propertyweb" type="xsd:string">WEBIBM</property>
<component name="EnhancedSbeanComponent">
<implementation.ejb ejb-link="HelloJeeEnhancedEjb.jar#HelloJeeEnhancedSBean"/>
<service name="HelloJeeEnhancedSBean_HelloJeeEnhancedSBeanRemote">
<interface.java interface="sca.jee.HelloJeeEnhancedSBeanRemote" />
</service>
<reference name="sbean2">
<interface.java interface="sca.jee.HelloJeeScaService" />
</reference>
<property name="propertyejb" type="xsd:string">IBMEJB</property>
</component>
<component name="EnhancedWebComponent">
<implementation.web web-uri="HelloJeeEnhancedWeb.war"/>
<property name="propertyweb" type="xsd:string">IBMWEB</property>
</component>
</composite>
In Listing 2, the individual components specify the SCA services and references. If
any of the services and references defined in the components in the application
composite need to be exposed to the SCA domain via the component that
implements this Java EE archive, they must be promoted from within the application
composite. Any properties needed by individual modules must also be defined at the
composite level. In the above example, the SCA property named propertyweb will
have a value of WEBIBM and not IBMWEB in the derived component.
The use of application.composite enables the EJB services exposed as SCA
services to now be rewired to SCA-compatible bindings such as binding.ws and
binding.jms. The external SCA contribution using this Java EE archive as an SCA
component implementation would look similar to Listing 3.
Listing 3. Component in the external SCA contribution using the Java EE archive
...
<component name="HelloJeeEnhancedComponent">
<implementation.jee archive="HelloJeeEnhancedEar.ear"/>
<service name="HelloJeeEnhancedSBean_HelloJeeEnhancedSBeanRemote">
<interface.java interface="sca.jee.HelloJeeEnhancedSBeanRemote" />
<binding.ws/>
</service>
<reference name="sbean2" target="HelloJeeScaComponent/HelloJeeScaService">
<interface.java interface="sca.jee.HelloJeeScaService" />
</reference>
</component>
...
Figure 3 illustrates how these components fit together in a typical Java EE
implementation.
Figure 3. Java EE as an SCA implementation
The SCA references and properties can be accessed in EJB and Web modules
through annotated fields that are injected with the defined values at run time. Listing 4
provides an illustrative example.
Listing 4. Accessing SCA references and properties through annotations
...
import javax.ejb.Stateless;
import org.osoa.sca.annotations.*;
...
@Stateless                                   // EJB annotation
public class AccountServiceImpl implements AccountService {

    @Reference protected Brokerage backend;  // SCA reference
    @Property protected String currency;     // SCA property
    @Context protected SCAContext context;   // SCA context

    public AccountReport getAccountReport(String customerId) {
        // use injected reference
        BigDecimal acctValue = backend.getAccountValue(customerId, "IBM");
        // use injected property
        if (!"US DOLLARS".equals(currency)) {
            // use injected context to look up another SCA service
            MoneyChanger moneyChangerService =
                context.getService(MoneyChanger.class, "moneyExchange");
            // invoke SCA service
            acctValue = moneyChangerService.exchange(currency, acctValue);
        }
        return backend.getAccountReport(customerId, acctValue);
    }
}
...
Another interesting aspect of defining SCA references through the
application.composite file is the ability to override EJB references with compatible
SCA references. As long as the interface of the SCA reference matches that of the
EJB reference, the SCA run time injection will override that EJB. For example, as
shown in Listings 5 and 6, the EJB reference sbean2 can be overridden by an SCA
reference defined in application.composite. The bean class includes the annotation
in Listing 5, and the application.composite file includes the <reference> tag in
Listing 6.
Listing 5. Annotation for overriding EJB references
@Stateless
public class HelloJeeEnhancedSBean implements HelloJeeEnhancedSBeanRemote,
HelloJeeEnhancedSBeanLocal {
@EJB HelloJeeReferencedBeanRemote sbean1;
@EJB HelloJeeReferencedBeanRemote sbean2;
// This will be overridden with an SCA reference offering the same operation
...
Listing 6. <reference> tag for overriding EJB references
....
<reference name="sbean2" promote="EnhancedSbeanComponent/sbean2"
target="HelloJeeScaComponent/HelloJeeScaService">
<interface.java interface="sca.jee.HelloJeeScaService" />
</reference>
....
In the above example, the injected value for sbean2 would be that of the SCA
reference sbean2 and not the EJB reference of the same name. Notice that SCA
treats references as unique across the composite when handling promoted
references. You should therefore pay attention to cases where EJB references with
the same name resolve to different EJB modules.
Deployment
If you intend to expose EJB services and references as SCA services and
references, and are happy with the auto-generated names of the services and
references, no changes are needed in your current implementation.
If, however, you want to limit the services exposed or want to rewire EJB references
and services, then the Java EE archive should at minimum have an
application.composite file. Only the services and references promoted in that
composite file would then be used when the SCA runtime derives the component
type. If you want the Java EE modules to access SCA artifacts, only then would the
implementation need to be changed.
For a user with a Java EE archive, the assembly and deployment steps in a
WebSphere Application Server environment would be:
1. Create a new SCA JAR with a deployable composite that uses the Java EE archive as its component implementation using implementation.jee.
2. Import both the Java EE archive and the SCA JAR as assets.
3. Create a business-level application (BLA) and add the Java EE archive and SCA JAR to the BLA as deployed assets. The order is important: all Java EE archives used in an SCA contribution as SCA component implementations must be deployed before the SCA contribution is deployed.
4. Start the BLA.
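These steps can also be scripted. The following wsadmin (Jython) sketch performs the same sequence; the file paths, asset names, and BLA name are examples only, and the parameter lists are trimmed to the essentials:

# Import the Java EE archive and the SCA contribution as assets
AdminTask.importAsset('[-source /tmp/HelloJeeEar.ear -storageType FULL]')
AdminTask.importAsset('[-source /tmp/HelloJeeSca.jar -storageType FULL]')

# Create the business-level application and add the deployed assets.
# Add the Java EE archive before the SCA contribution.
AdminTask.createEmptyBLA('[-name HelloJeeBLA]')
AdminTask.addCompUnit('[-blaID HelloJeeBLA -cuSourceID HelloJeeEar.ear]')
AdminTask.addCompUnit('[-blaID HelloJeeBLA -cuSourceID HelloJeeSca.jar]')
AdminConfig.save()

# Start the business-level application
AdminTask.startBLA('[-blaID HelloJeeBLA]')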
Security
The Java EE platform supports authorization and security identity policies. This
security quality of service (QoS) is enforced by the underlying Java EE container. The use of
authorization is supported in both SCA-enhanced and non-enhanced Java EE
archives. The authorization and security identity policies for EJBs that are
referenced by the implementation.jee element are enforced by specifying the
security constraints in the EJB deployment descriptor (ejb-jar.xml) in the EAR. These
are used in conjunction with the interaction policies on the bindings to authenticate
and authorize access to the Java EE components. Administrative and application
security both need to be enabled in order for security roles to be enforced. Any SCA
policy sets attached to an implementation.jee component with security or run-as
policies will be ignored.
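For example, a method permission in the assembly descriptor of ejb-jar.xml might look like the following fragment; the role name accountUser is illustrative:

<assembly-descriptor>
    <security-role>
        <role-name>accountUser</role-name>
    </security-role>
    <method-permission>
        <role-name>accountUser</role-name>
        <method>
            <ejb-name>HelloJeeEnhancedSBean</ejb-name>
            <method-name>*</method-name>
        </method>
    </method-permission>
</assembly-descriptor>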
Transactions
Transaction support for services defined in an implementation.jee component is
handled by the Java EE container. Transaction attributes for the service are
specified in the EJB deployment descriptor (ejb-jar.xml) in the EAR. To find a
description of how SCA intents map to Java EE transaction attributes, see section
5.3 of the SCA Java EE Integration Specification, titled Mapping of EJB Transaction
Demarcation to SCA Transaction Policies (see Resources). To propagate or
suspend transactions for references in an implementation.jee component, specify
the required SCA transaction intents in the composite file.
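For example, to propagate any existing transaction over the sbean2 reference shown in Listing 3, you could attach the standard propagatesTransaction intent to that reference. This is only a sketch; whether a given intent can be honored depends on the binding in use:

<reference name="sbean2" requires="propagatesTransaction"
           target="HelloJeeScaComponent/HelloJeeScaService">
    <interface.java interface="sca.jee.HelloJeeScaService" />
</reference>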
Conclusion
There are a number of benefits to integrating Java EE with SCA.
• EJB services can be exposed as SCA services and then re-wired over
various wire formats.
• EJB module EJBRefs can be rewired using SCDL without changing the
underlying Java EE artifacts.
• An SCA programming model can be used to invoke business services in
Java EE components.
• Remotable services can be made available as SCA services over the
SCA default binding without the need for defining an SCDL.
• Services can be intermixed using other implementation types, such as
implementation.java and implementation.wsdl.
In summary, the WebSphere Application Server Feature Pack for SCA enables Java
EE programmers and architects to transcend differences in implementation
technologies and leverage a service component architecture with little or no change
to their implementation, making it easier for them to take advantage of existing code
while exploring SCA.
Resources
• SCA Service Component Architecture: Java EE Integration Specification.
• IBM Education Assistant: IBM WebSphere Application Server V7.0 Feature
Pack for Service Component Architecture.
• WebSphere Application Server Information Center.
• Other articles in this series
• IBM developerWorks WebSphere
About the authors
Anbumunee Ponniah
Anbumunee Ponniah is a developer/tester involved in implementing Java EE
support in the SCA feature pack. Anbu has a rich background covering many technical
areas ranging from Unix internals to application programming. He has previously
been a technical lead for Java class libraries in IBM's Java technology center.
Chao M Beck
Chao Beck is a technical lead for the feature pack for Service Component
Architecture (SCA) early program. She has long been a member of the Application
Integration Middleware early programs team responsible for the execution of early
programs for IBM WebSphere Application Server products. She handles the
development and delivery of education for new product functions and the provision of
customer support during pre-GA (early) programs and post-GA (customer
acceleration) programs.
Vijai Kalathur
Vijai Kalathur is part of the SCA Feature Pack QoS team. Since joining IBM in 2005
he has worked on the WebSphere Application Server for z/OS security team and the
SCA Feature Pack team. He has worked on various components of the SCA Feature
Pack including admin, security, transactions, and JMS binding.
The Support Authority: Running WebSphere
Application Server as a Windows service
Skill Level: Introductory
Alain Del Valle ([email protected])
WebSphere Application Server L2 Team
IBM
Dr. Mahesh Rathi ([email protected])
WebSphere Application Server SWAT Team
IBM
06 Oct 2010
IBM® WebSphere® Application Server can run as a Windows® service. A Windows
service can run under a local user account, a domain user account, or the
LocalSystem account. This article will help a domain administrator set up a
WebSphere Application Server to run as a Windows service under a domain user
account . This process involves the domain administrator logging in to the local
machine and providing the correct rights for the domain user.
In each column, The Support Authority discusses resources, tools, and other
elements of IBM® Technical Support that are available for WebSphere® products,
plus techniques and new ideas that can further enhance your IBM support
experience.
This just in...
As always, we begin with some new items of interest for the WebSphere community
at large:
• Check out the IBM Conferences & Events page for a list of upcoming
conferences. The IBM European WebSphere Technical Conference is a
4.5 day event to be held 11-15 October 2010 in Düsseldorf, Germany.
This event combines the WebSphere and Transaction & Messaging
Conferences of previous years into one seamless agenda, offering two
great conferences for the price of one. This year’s conference will be
co-located with the Portal Excellence Conference, dedicated to portal
business solutions and technical strategies.
• Earlier this year, the IBM Support Portal was named one of the Top Ten
Support Sites of 2010 by the Association of Support Professionals. Have
you tried the IBM Support Portal yet? All IBM software products are now
included, and all software product support pages have been replaced by
IBM Support Portal. See the Support Authority's Introduction to the new
IBM Support Portal for details.
• Learn, share, and network at the IBM Electronic Support Community blog
on developerWorks.
• Check out the new Global WebSphere Community at
websphereusergroup.org. Customize the content on your personalized
GWC page and connect to other "WebSpherians" with the same interests.
• Several exciting webcasts are planned through October at the
WebSphere Technical Exchange. Check the site for details and become a
fan on Facebook!
Continue to monitor the various support-related Web sites, as well as this column,
for news about other tools as we encounter them.
And now, on to our main topic...
Leveraging Windows services
A Windows service can be run in the security context of a local user account, a
domain user account, or the LocalSystem account. To help decide which account to
use, an administrator should install the service with the minimum set of permissions
required to perform the service operations. Typically, this means creating a domain user
account for the service and granting that account the specific access rights and
privileges required by the service at run time.
There can be many reasons you might want to do this. Windows services typically
live on each local machine and can be controlled by a local user or a domain user. In
some cases, it can be beneficial to set up the service to run as a domain user. For
example, if multiple machines are set up to run IBM WebSphere Application Server
as a service, a domain user account can be set up to control all those services. If a
password ever needs to be changed, it can be modified in just the domain controller
for that user. If local system users were to run the services, the password would
need to be changed in every machine instead of just once for the user in the domain
controller. When the password changes for a user that is running a Windows
service, the only way to get the service to work again is to update the service and
repeat all the steps.
The task of setting up WebSphere Application Server to run as a Windows service
under a domain user account can be complicated. This article explains the general
information you need to accomplish this setup in Windows Server 2003. You will
learn how to create the Windows service using the WASServiceCmd utility and how
to change the service to log on as the domain user account.
For the purpose of this article, it is assumed that the local machine is already part of
the domain. Be aware that once the machine is added to the domain, the Domain
Admins group is added by default on the local machine, as shown in Figure 1.
We’ll refer to two different users located in the Active Directory of the domain
controller:
• alainadmin: A domain administrator in the domain controller, shown in
Figure 2.
• alainuser: A domain user with basic user rights, not an administrator in
the domain controller. This is the user for which the setup is being run,
shown in Figure 3.
Figure 1. Domain Admins group gets added by default when machine is added
to domain
Figure 2. Shows alainadmin is a member of Domain Admins group
Figure 3. Shows alainuser is a member of Domain Users group
The operating system requires the domain user to have specific rights in order to run
the service. To set up and run this function on a Microsoft Windows operating system, the
user must belong to the Administrators group and have these advanced user rights:
• Act as part of the operating system.
• Log on as a service.
To demonstrate, let’s step through the procedure:
1. Log on to the local machine with a user that has Domain Administrator rights (alainadmin).
2. Add the domain user (alainuser) to the Administrators group of the local machine, as shown in Figure 4:
a. Right-click My Computer and select Manage. In the directory tree, navigate to Local Users and Groups > Groups.
Figure 4. Shows path to get to Administrators Group in
Windows 2003
b. To add the user to the Administrators group, double-click Administrators, then select Add.
c. Click Advanced. If prompted for username and password, use the credentials for the domain administrator in the domain controller (alainadmin).
d. Click Find Now. The users from the domain will display. Add your domain user to the group of Administrators (Figure 5), then click OK and Apply.
Figure 5. Shows alainuser getting added to the Administrators
group of the local machine
3. Add the two required user rights assignments:
a. Click the Windows Start button and navigate to Settings > Control Panel > Administrative tools > Local Security Policy.
b. Select User Rights Assignment in the left window (if not already selected) and then double-click Act as part of the operating system (Figure 6).
Figure 6. Security setting: Act as part of the operating system
c. Click Add User or Group. Select the user and click OK to add the user to the policy (Figure 7).
Figure 7. Add the local user alainuser to the security policy
4. Repeat the previous step to add the user to the Log on as a service policy (Figure 8).
Figure 8. Local security settings
5. Log off Domain Admin (alainadmin) and log in as the Domain user (alainuser).
6. Run the WASServiceCmd utility to create the service. Earlier this year, The Support Authority presented the WASService command. You can download the utility from the Using WASServiceCmd to create Windows services for WebSphere Application Servers Technote. Follow the instructions to unzip the tool to the WebSphere_root/AppServer/bin directory. WASServiceCmd.exe is a front end for WASService.exe, which is shipped with WebSphere Application Server. The creation of a service takes many parameters, and this utility helps minimize the human errors that can occur during service creation; a sample of the resulting WASService command is shown after this procedure.
7. Change the service to log on as the domain user. Click the Windows Start button and navigate to Settings > Control Panel > Administrative tools > Services.
8. Locate the service that was created. Double-click the service, select the Log on tab, and change the Log on as selection to This account.
Figure 9. Shows the domain user alainuser set as the Log on as account
The service should now be working with the domain user alainuser. As shown in
Figure 9, the Log on as value is AUSTINL2\alainuser, which indicates that the
service is now being controlled by a domain user account.
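For reference, WASServiceCmd drives WASService.exe with parameters similar to the following; the service name, profile, paths, and credentials shown here are examples only:

WASService.exe -add "AppSrv01_server1"
    -serverName server1
    -profilePath "C:\Program Files\IBM\WebSphere\AppServer\profiles\AppSrv01"
    -logRoot "C:\Program Files\IBM\WebSphere\AppServer\profiles\AppSrv01\logs\server1"
    -logFilePrefix server1
    -stopArgs "-username wasadmin -password wasadminpwd"
    -encodeParams
    -startType automatic
    -restart true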
Conclusion
This article described how a Windows Server 2003 domain administrator can set up a
user that lives in the domain controller, has the bare minimum user rights, and runs
the WebSphere Application Server Windows service on the local machine. The process
consists of the domain administrator logging in to the local machine and granting the
domain user the rights required to run the Windows service.
Resources
Learn
• Information Center: WASService command
• The Support Authority: Take the confusion (and errors) out of creating Windows
services for WebSphere Application Server
• Video: Animated demonstration of the WASServiceCmd tool (Flash)
• Guidelines for Selecting a Service Logon Account
• The Support Authority: If you need help with WebSphere products, there are
many ways to get it
• IBM Software product Information Centers
• IBM Software Support Web site
• IBM Education Assistant
• IBM developerWorks
• IBM Redbooks
• WebSphere Software Accelerated Value Program
Get products and technologies
• Using WASServiceCmd to create Windows services for WebSphere Application
Servers
• IBM Software Support Toolbar
• IBM Support Assistant
Discuss
• Forums and newsgroups
• Java technology Forums
• WebSphere Support Technical Exchange on Facebook
• Global WebSphere Community on WebSphere.org
• Follow IBM Support on Twitter!
• WebSphere Electronic Support
• WebSphere Application Server information
• WebSphere Process Server
• WebSphere MQ
• WebSphere Business Process Management
• WebSphere Business Modeler
• WebSphere Adapters
• WebSphere DataPower Appliances
• WebSphere Commerce
• IBM Support Assistant Tools
About the authors
Alain Del Valle
Alain Del Valle was born in Cuba and moved to Miami, Florida in 1984. Alain
received a B.S in Electrical Engineering in 2003 from Florida International University.
He joined the WebSphere Application Server Team in 2003 in Austin, Texas and is a
senior member of the WASADM team. He leads the lab for level 2 Support.
Dr. Mahesh Rathi
Dr. Mahesh Rathi has been involved with WebSphere Application Server product
since its inception. He led the security development team before joining the L2
Support team, and joined the SWAT team in 2005. He thoroughly enjoys working with
demanding customers, on hot issues, and thrives in pressure situations. He received
his PhD in Computer Sciences from Purdue University and taught Software
Engineering at Wichita State University before joining IBM.
Innovations within reach: There's a new purple
appliance in town
Frequently asked questions about the WebSphere DataPower
XC10 elastic caching solution
Skill Level: Introductory
Charles Le Vay ([email protected])
Senior Software Architect
IBM
06 Oct 2010
The IBM® WebSphere® DataPower® XC10 Appliance is a quick, easy, and
cost-effective way to add an elastic data caching tier to enhance your application
infrastructure. To help introduce you to the capabilities of this new appliance, which
combines the robust DataPower hardware appliance platform with IBM's state of the
art distributed caching technology, here are the top ten frequently asked questions
about this new product.
Each installment of Innovations within reach features new information and
discussions on topics related to emerging technologies, from both developer and
practitioner standpoints, plus behind-the-scenes looks at leading edge IBM®
WebSphere® products.
Resistance is futile
The IBM WebSphere DataPower XC10 Appliance is designed to be the drop-in
caching tier for your IBM WebSphere Application Server infrastructure. Unveiled at
IBM’s IMPACT conference in early May 2010, the XC10 appliance is a combination
of the robust DataPower hardware appliance platform and IBM's state of the art
distributed caching technology. And, like the IBM WebSphere CloudBurst™
Appliance, it is also purple, so how could you not want one?
The XC10 appliance has only been generally available since the end of June, and so
this article is meant to help acquaint you with its impressive overall capabilities by
answering some of the most frequently asked questions about the product. The top
ten things that you might want to know most about the XC10 appliance are:
1. What is an XC10 appliance?
The WebSphere DataPower XC10 Appliance is an elastic caching solution in a
box. Each XC10 appliance has 160GB of hardened memory. By design, it is easy to
install, configure, and manage.
The XC10 appliance provides a quick and easy way to integrate a caching tier into
your enterprise infrastructure. The data caching tier is inserted behind the
application server tier. The purpose of the data caching tier is to provide scalable,
fault tolerant, coherent data grids for your application server tier. Data grids are the
containers used to store cacheable application data. The XC10 appliance can store
multiple data grids. A grouping of XC10 appliances is called a collective. When a
collective contains more than one XC10 appliance, one or more replicas of the data
grids are created and distributed across a collective. Any change to a data grid is
persisted across all of the replicas of that data grid. As XC10 appliances are added
or removed from the collective, the data grids and replicas are automatically
redistributed across the collective. The monitoring, management, and placement of
data grids and replicas is key to the reliability and scalability of an elastic data grid.
2. What can the XC10 appliance do for me?
It’s all about the cache. WebSphere Application Server (along with other WebSphere
products) supports both dynamic cache based optimizations and HTTP session
persistence for performance enhancement, scalability, and high availability. The
XC10 appliance further enhances these optimizations by
offloading the application server cache memory requirements and disk usage for
dynamic cache and HTTP session persistence. In addition, applications using the
IBM WebSphere eXtreme Scale ObjectMap APIs can utilize the XC10 appliance
data cache to store serializable objects.
The XC10 supports three types of data grids:
• Simple data grids store simple key-value pairs.
• Session data grids store HTTP session management data.
• Dynamic cache grids store cacheable objects from applications that
utilize the WebSphere dynamic cache APIs or container-based caching.
3. Why would I want or need an XC10 appliance?
The short answer is because the XC10 appliance saves you money.
In a traditional application server environment, cache memory is contained within
each application server instance. The cache occupies the same addressable
memory space as the applications. Therefore, there is a limit to cache size. If the
cache occupies too much memory, it can actually degrade the performance of the
application. Because cache is contained within an application server instance, if you
have several application server instances configured in a cluster, then each one
contains a cache and these caches will eventually all contain the same copies of the
cached application data. The copies of the data are determined to be fresh or stale
based on communication of invalidation messages. The greater the number of
server instances, the greater amount of invalidation chatter is required to keep the
cached data fresh among the server instances. Additionally, high-speed disks or
databases (or both) are typically required to increase the size of the cache, or for
high availability.
The introduction of an elastic data grid into your enterprise infrastructure can
dramatically reduce the memory footprint required for each application server
instance. In a virtualized environment, this memory could be used to support
additional virtualized servers, thus improving the utilization of the physical hardware.
Data grids provide a single coherent cache shared by all of the application server
instances in a cluster. Therefore, the data is always fresh because all of the
application instances see the same single copy in the cache. This removes all of the
invalidation chatter in traditional clustered application server architecture and can
result in higher transactional performance. Since the primary characteristics of a
data grid are scalability and high availability, high-speed disks and databases used
exclusively for persistence are no longer needed in most cases. For simple grids,
however, a database is still required for primary storage. Additionally, because
elastic data grids scale in a linear fashion, there is virtually no limit to their size.
With a group of XC10 appliances, you can easily build very large data grids. The
larger the data grid, the more cacheable data you can store, thus significantly
reducing costly redundant transactions because the requested data is most likely
already in the very large data grid.
In summary, elastic data grids improve hardware utilization, reduce costly redundant
transactions, and eliminate the need for a high speed disk and database used for
persistence. Altogether, these improvements can result in significant cost savings.
4. Do I need to install code to use the XC10 appliance?
Yes, you will need to download the (free) WebSphere eXtreme Scale client from the
IBM Support Portal. You can either install the client standalone or embed it for
integration with WebSphere Application Server. The standalone
installation supports only simple data grids. The embedded installation is required for
HTTP session data grid and dynamic cache data grid support. (The embedded
installation also supports simple grids.)
If you choose the embedded installation, you must augment your existing
WebSphere Application Server profiles once the installation has successfully
completed. Following profile augmentation, you will then be able to choose the XC10
appliance as an option for HTTP session data grid configuration or dynamic cache
data grid configuration in the WebSphere Application Server administrative console.
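One way to augment an existing profile from the command line is sketched below; the xs_augment template directory name is an assumption based on a typical WebSphere eXtreme Scale client installation, and the profile name is an example:

app_server_root\bin\manageprofiles.bat -augment -profileName AppSrv01
    -templatePath app_server_root\profileTemplates\xs_augment\default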
5. Do I need to write any code to use the XC10 appliance?
You do not need to write any additional code for HTTP session caching or for
dynamic cache support using the XC10 appliance.
For HTTP session caching, you simply configure your application to use the XC10
appliance using the WebSphere Application Server administration console.
For dynamic cache support, if your application already leverages the dynamic cache
using the dynamic cache APIs, or you use container-level dynamic cache support in
WebSphere Application Server, no additional code is required. In order to configure
WebSphere Application Server to use the XC10 appliance as the dynamic cache
provider, you must first specify a catalog service running on the appliance by
creating a catalog service domain in your WebSphere Application Server. Then, to
create the data grid on your appliance, you either run the provided
dynaCfgToAppliance script or manually create the data grid using the XC10
appliance browser-based user interface. You will need to define the XC10 appliance
as the dynamic cache provider using the WebSphere Application Server
administrative console, configure the replication settings, and finally add a topology
custom property for the cache instance you want to modify.
For simple data grids, whether using the standalone client or embedded WebSphere
Application Server client, you must add code to your application that uses the
WebSphere eXtreme Scale ObjectMap APIs to perform simple create, retrieve,
update, and delete (CRUD) operations on the simple data grid.
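For a rough idea of what that code looks like, here is a minimal standalone client sketch using the ObjectMap APIs. The host, port, grid name, and keys are placeholders, and the sketch assumes a simple data grid named MySimpleGrid has already been created on the appliance:

import com.ibm.websphere.objectgrid.ClientClusterContext;
import com.ibm.websphere.objectgrid.ObjectGrid;
import com.ibm.websphere.objectgrid.ObjectGridManager;
import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
import com.ibm.websphere.objectgrid.ObjectMap;
import com.ibm.websphere.objectgrid.Session;

public class SimpleGridClient {
    public static void main(String[] args) throws Exception {
        // Connect to the catalog service running on the XC10 appliance
        ObjectGridManager ogm = ObjectGridManagerFactory.getObjectGridManager();
        ClientClusterContext ctx = ogm.connect("xc10host:2809", null, null);

        // Obtain the simple data grid and a session for this thread
        ObjectGrid grid = ogm.getObjectGrid(ctx, "MySimpleGrid");
        Session session = grid.getSession();

        // For a simple data grid, the default map typically has the same name as the grid
        ObjectMap map = session.getMap("MySimpleGrid");

        // Basic CRUD operations
        map.insert("customer:1001", "Jane Doe");            // create
        String value = (String) map.get("customer:1001");   // retrieve
        map.update("customer:1001", "Jane A. Doe");         // update
        map.remove("customer:1001");                        // delete

        System.out.println("Retrieved: " + value);
    }
}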
6. What if I need a bigger cache?
After you have realized the benefits of adding a data grid to your infrastructure by
deploying a WebSphere DataPower XC10 Appliance, you will probably want to add
another appliance for scalability and high availability. The beauty of an elastic data
grid is that it scales in a linear fashion. Therefore, by adding another appliance, you
grow your data grid capacity by another 160GB. Whenever you need more capacity,
simply add another appliance. If you need more servers to support a higher access
load, you grow your grid over multiple appliances.
To deploy an additional XC10 appliance, you must add it to a collective. The process
of adding another appliance into a collective -- appropriately termed "assimilation" -- will wipe out any existing data grids on the assimilated appliance. To complete the
assimilation, the members of the collective will evenly redistribute the data grids and
create replicas across the updated collective. In a collective, any update or change
to a data grid is persisted across all other appliances in the collective containing
replicas for that data grid. (If you are familiar with the "Star Trek" television series,
then the terms collective and assimilation might also be familiar to you. Whether it's
a coincidence or not that these same terms are used to describe XC10 appliance
concepts, the analogies between the two are valid.)
7. How does an XC10 appliance fit into a highly available architecture?
High availability is fundamental to the architecture and design of data grid
technology. Of course, from a hardware perspective, you must have more than one
XC10 appliance to avoid having a single point of failure. By creating a collective of
more than one XC10 appliance, data grid replication is automatically enabled.
Data grid replication is the critical component to the self-healing nature of elastic
data grids. For each primary data grid, a replica data grid is created on an appliance
that does not contain the primary data grid. If there is a failure on the appliance that
contains the primary data grid, the replica data grid is promoted to primary data grid.
A new replica is automatically created on another available appliance in the
collective. In a failover situation, primary and replica data grids are automatically
redistributed across the remaining appliances in the collective.
The critical component of the collective responsible for high availability failover is the
catalog server. The catalog server keeps track of the location of all of the primary
and replica grids in the collective. A catalog server runs on each appliance in the
collective, with a limit of three catalog servers per collective. Together, the
communicating catalog servers form the catalog service, which coordinates the
placement of the primary and replica grids across the collective.
For finer granularity in defining a high availability architecture for your data grids, each
appliance in a collective can be associated with a physical location, called a zone.
Zones can represent physical rack location, room number, data center location, and
so on. Therefore, it is possible to associate groups of XC10 appliances in a
collective with specific locations for high availability reasons, such as power grid
failover, network failover, or data center failover. For high availability best practices,
a collective should span several zones.
Zones help the catalog service determine placement of data grids such that primary
and replica data grids are placed in different zones. If a failure occurs in one zone,
the replicas in the working zones are promoted to primaries and new replicas are
created on other appliances in the collective in the other working zones. When the
failed zone comes back online, the primary and replica data grids are redistributed
among all of the available zones by the catalog service to maximize high availability.
8. Is the XC10 appliance secure?
The XC10 appliance is secured on several levels. It is built on the
hardened, tamper-proof DataPower platform. If someone tries to open the appliance
and triggers the internal intrusion detection switch, the XC10 appliance cannot be
powered back on until it is sent back to IBM to be reset. The XC10 appliance is not a
general-purpose computing platform. It provides no access to the operating system
through a command shell, nor does it provide any way to upload and execute user code
or user scripts. You can only upload signed trusted firmware updates to the XC10
appliance. Finally, the XC10 appliance provides fine-grained user and user group
permissions for administrative tasks and data grid security. When data grid security
is enabled, only authorized users can access data in the data grid. The XC10
appliance also supports integration with a Lightweight Directory Access Protocol
(LDAP) directory for user authentication.
9. How do I manage the appliance?
The XC10 appliance is managed using a browser-based user interface. From the
user interface, you can:
• Manage user interface security.
• Manage users and groups.
• Configure date and time settings.
• Configure e-mail delivery of user password resets.
• Configure the appliance network settings.
• Create and manage collectives and zones.
• Manage and secure data grids.
• Update appliance firmware.
• Shut down or restart the appliance.
10. How do I know what is going on within the appliance?
In addition to performing administrative tasks, the XC10 appliance user interface
provides the ability to monitor both the progress of administrative tasks and
performance of your data grids. This information helps you to determine the overall
capacity and performance of your XC10 appliance.
By selecting Data grid overview in the XC10 appliance administrative console, you
can get a high level view of:
• Used and available capacity on the appliance or collective.
• Top five data grids by average transaction time.
• Top five data grids by average throughput in transactions per second.
• Used capacity over time.
• Average throughput over time.
• Average transaction time over time.
You can also select an overview of individual data grids, which provides similar
information to the data grid overview but for a specific data grid. For further details
about a specific data grid, you can select Data grid detail reports with which you
can get both data grid and map details.
Conclusion
The WebSphere DataPower XC10 Appliance is a quick, easy, and cost-effective
way to add an elastic data caching tier to enhance your application infrastructure.
Additionally, the XC10 appliance easily integrates with your WebSphere products. It
can offload the memory and high-speed disk requirements for dynamic cache
support, eliminate the need for database storage for HTTP session state
persistence, and reduce redundant transactions. These optimizations can result in
higher transactional performance, better hardware utilization, and lower memory
footprint, altogether adding up to potentially significant cost savings.
Resources
Learn
• IBM WebSphere DataPower XC10 Appliance product information
• IBM WebSphere DataPower XC10 Appliance Information Center
• Video: IBM WebSphere DataPower XC10 Appliance Session Management
• Video: IBM WebSphere DataPower XC10 Appliance Monitoring
• IBM developerWorks WebSphere
Discuss
• IBM WebSphere DataPower XC10 Appliance wiki
About the author
Charles Le Vay
Charles Le Vay is a senior software architect. He recently joined the WebSphere
Emerging Technologies team as a technical evangelist. His current focus is on
promoting the advantages of elastic data grid technology within the enterprise.
Before becoming a technical evangelist, he was the Web Service interoperability
architect for IBM's WebSphere Application Server. He represented IBM on the Web
Service Interoperability Organization (WS-I) Reliable Secure Profile (RSP) Working
Group. As an interoperability architect, Charles focused on ensuring IBM products
meet industry standard interoperability criteria. He was responsible for identifying and
detailing best practices for Web services interoperability. Prior to this position,
Charles specialized in mobile application development, wireless technology, and
extending enterprise applications securely to mobile devices. Before joining IBM,
Charles developed advanced submarine sonar systems for the Navy and specialized
in signal processing and underwater acoustics. Charles is a graduate of Duke
University with a degree in physics.
The WebSphere Contrarian: Change is hard, or is
it?
Skill Level: Intermediate
Tom Alcott
Consulting IT Specialist
IBM
06 Oct 2010
Changing the LDAP bind password in IBM® WebSphere® Application Server doesn’t
have to be complex and mandate an outage or interruption of service. The
WebSphere Contrarian describes a simple pattern that you can employ to change
the LDAP bind password used by WebSphere Application Server without a service
outage.
In each column, The WebSphere® Contrarian answers questions, provides
guidance, and otherwise discusses fundamental topics related to the use of
WebSphere products, often dispensing field-proven advice that contradicts
prevailing wisdom.
Changing how you feel about change
We often think of "change" as being "difficult," and with a quick search of the Internet
using your favorite search engine, you can find a number of articles titled Why
Change is Hard or Reasons Change is Difficult, plus even a song or two on this
theme.
That said, while I too can find change daunting, I wouldn't always characterize
change as being difficult -- which probably isn’t totally unexpected given my
contrarian nature. In past installments of this column, in fact, I have discussed
change in WebSphere Application Server and the precautions to take when making
changes in WebSphere Application Server. I would like to take this opportunity, then,
to return to the subject of change in the context of WebSphere Application Server
administration, specifically a common security change task: changing LDAP
passwords without downtime.
LDAP password changes
The solution for making changes to passwords entails a pattern, one that isn’t
specific to WebSphere Application Server or LDAP, but one that relies on the use of
two user IDs and passwords and has been used since antiquity -- well, computer
antiquity at least -- to change passwords that are in use between one server and
another. And because this pattern isn’t specific to WebSphere Application Server or
LDAP, it can be used for other resources as well, like databases.
The pattern is this:
1. Create two user IDs on LDAP and give them the same permissions. To keep this simple, let's call the two user IDs userA and userB.
2. When you first configure WebSphere Application Server to use LDAP, use userA and the password for userA as the LDAP bind distinguished name and bind password.
3. When it's time to change passwords -- say, in 60 days because that's how often your corporate security policy requires passwords to be changed -- log in to LDAP and change the password for userB, at which point userB now has a new password.
4. After saving the new password for userB in LDAP, go into WebSphere Application Server and update it to use userB and the password for userB as the LDAP bind distinguished name and bind password.
5. Save the configuration.
At this point, any servers already running are using userA and will continue to work,
and any servers that are either started or restarted will use userB; both user IDs will
work.
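If you script the configuration change in step 4 rather than using the administrative console, a minimal wsadmin (Jython) sketch, assuming a standalone LDAP user registry and a hypothetical distinguished name and password, would be:

# Point the LDAP registry configuration at userB and its new password
AdminTask.configureAdminLDAPUserRegistry('[-bindDN cn=userB,o=example -bindPassword newSecretPwd]')
AdminConfig.save()

After the save, restart the servers (or dynamically update the binding information, as described next) so they pick up the new bind identity.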
If you don’t want to restart all your servers, you can choose to dynamically update
the LDAP binding information, thus avoiding the need to incur a service interruption
when updating the LDAP password.
Remember that dynamically updating the LDAP binding information in WebSphere
Application Server does NOT eliminate the need for two IDs or the need to update
the WebSphere Application Server configuration; it just avoids the need to restart the
WebSphere Application Server processes (deployment manager, node agent,
application server). If you only employ one user ID, then the instant you change a
password in LDAP, any WebSphere Application Server still using the old password
will potentially be unable to contact LDAP, which will result in an authentication
failure. You must use two user IDs to employ this pattern.
By following these simple steps, you can change passwords used by WebSphere
Application Server to access external resources, like LDAP, without a complete
service outage.
Change is easy!
Acknowledgements
Thanks to Keys Botzum for his suggestions, which led to devoting this column to this
topic.
Resources
• The WebSphere Contrarian
• Changing host names and migrating profiles in WebSphere Application
Server
• Less might be more when tuning WebSphere Application Server
• Information Center: Updating LDAP binding information
• Book: IBM WebSphere: Deployment and Advanced Configuration by Roland
Barcia, Bill Hines, Tom Alcott and Keys Botzum, IBM Press, 2004
• IBM developerWorks WebSphere
About the author
Tom Alcott
Tom Alcott is consulting IT specialist for IBM in the United States. He has been a
member of the Worldwide WebSphere Technical Sales Support team since its
inception in 1998. In this role, he spends most of his time trying to stay one page
ahead of customers in the manual. Before he started working with WebSphere, he
was a systems engineer for IBM's Transarc Lab supporting TXSeries. His
background includes over 20 years of application design and development on both
mainframe-based and distributed systems. He has written and presented extensively
on a number of WebSphere run time issues.
Comment lines: The challenges of introducing new
technology
Skill Level: Introductory
Andre Tost ([email protected])
Senior Technical Staff Member
IBM
06 Oct 2010
Technologies that are new to an organization present a number of issues simply
because they are new. Such issues are rarely addressed properly or sufficiently, if at
all. The lack of a formal process for introducing new technology into an IT
environment is one of the biggest challenges faced by companies looking to leverage
new products. Here is a look at how you can plan for introducing new technologies --
including new software, new systems, new versions of existing software and
systems, and more -- to ensure the proper technical teams and governance
mechanisms are involved.
Introduction
I have spent a significant amount of my time over the last several years on a series
of projects across multiple industries in locations all over the world. The most
important underlying theme during this time was (and still is) the introduction and
promotion of the Service Oriented Architecture concept as a means of organizing
functionality in a decoupled, dynamic, and business-aligned manner.
For many organizations, this new concept can be rather disruptive in that it changes
the way solutions are designed, implemented, and operated. Companies have to
deal with new products and new patterns of solution design, new requirements
towards the maintenance and operation of business solutions, and new opportunities
for directly supporting the business needs in IT. However, most organizations try to
address these challenges with their existing roles, responsibilities, and processes. In
some cases, they realize too late that a more fundamental change is needed: a
change of processes, organizations, and, yes, culture.
In this article, I want to describe a common issue that I have come to identify as "the
challenge of introducing new technology" and look at the best way an organization
can deal with this challenge.
Why new technology?
This is a bit of a rhetorical question. Dealing with and leveraging new technology is a
part of our lives in IT. In fact, the pace with which new technologies emerge is
steadily increasing, and so companies that can leverage new technology more quickly
than others gain a competitive advantage and are able to deliver real business value
faster.
In the context of this discussion, "new technology" includes:
• New software, specifically new commercial, off-the-shelf software
products or new middleware products.
• New systems, specifically new hardware platforms or new operating
systems.
• New major versions of existing software or hardware.
• New significant functionality leveraged in an existing hardware or software
product.
Some of these new technology types are relevant simply because of time and
related legal or contractual obligations; new versions of hardware or software must
be introduced because vendor support might otherwise be dropped. New
generations of hardware are interesting because of improved technical
characteristics (for example, faster processors, less energy consumption, and so
on).
But sometimes new technology is triggered by emerging IT trends in general. For
example:
• The advent of service orientation is an example of such a trend, as
something that went from being brand new and not very well documented
or supported in actual products, to being today what I consider the state
of the art of software architecture and development. IT organizations have
had to react to this trend, with varying speeds of adoption.
• Another trend (which happens to be something I deal with a lot) is
business process management (BPM). As the name indicates, BPM
calls for better management of processes -- for example, a higher degree
of automation, shorter cycles, better monitoring of current events -- all of
which are only possible if new technology is used that directly supports
these goals.
• A possible future trend that might lead to equally large numbers of new
technologies and products is the concept of cloud computing, just to
name one last example.
Another trigger for new technology can come from the lines of business in an
enterprise. New business requirements might arise that result in the need for
technology that an IT organization currently does not support. Examples of
technologies that have to be introduced to the IT landscape in those cases are
portals, multi-channel architectures, and business rule systems.
A lifecycle for new technology
Despite the wide variety of new technologies (as defined above) and just as many
possible triggers for these technologies, you can still define a common lifecycle that
all technologies share. The lifecycle described in Table 1 is such an example; there
are many variations but all would include these same major aspects:
Table 1. Sample technology lifecycle
• Introduced -- A technology or product is brought into an organization for the first
time. This is when various groups get to take a first look at the technology, often
testing its use in the form of a proof of concept.
• Planned -- Initial planning has been conducted for the technology to go into
production in support of a business solution, which will act as a pilot. The planning
activities include operational aspects.
• Deployed -- The technology is in production, in a limited fashion, supporting only
one or two business solutions.
• Mature -- The technology has reached a high degree of maturity, including
organizational capabilities and refined processes for both development and
operational support. Known best practices have been applied.
• Retired -- No new solutions can be deployed on top of the technology. Existing
solutions are migrated over time, if possible.
Introducing a new technology
A suitable process must exist to support the lifecycle described above. Many
companies struggle with this because there is usually no defined process for handling
new technology.
In reality, the effort of bringing a new technology into production is mostly tied to the
regular development and deployment lifecycle (that is, the activities that are
performed for any new business solution) and paired with the individual heroics of
experienced staff.
In particular, it has been my experience that operational characteristics are
considered late -- or not at all. These characteristics include things like monitoring,
capacity planning, problem determination, and backup/restore procedures, none of
which deal with creating a new solution, but instead deal with operating and
maintaining it. The key point to remember is that a new technology will bring new
requirements with it that didn’t exist before.
Thus, I strongly recommend establishing a process that formally defines steps that
are required whenever a new technology is introduced to a company’s IT landscape.
Within such a process, you can ensure that the operational aspects are executed --
and executed as early as possible.
Describing such a process in detail is beyond the scope of this article, but it will be
helpful to look at a sample list of essential considerations that are most often
overlooked:
• Technology testing
As mentioned earlier, the introduction of new technology is often
embedded in the process of developing and deploying a business
application, and this includes all testing steps. And while these tests
include non-functional and operational testing, there are no steps specific
to the fact that new technology is being introduced.
For example, new ways of monitoring (plus appropriate alert and report
definitions) might have to be established, possibly along with using new
tools for monitoring. Or, new operational procedures might be needed to
ensure high availability. Or, performance tests have to be run to
determine initial benchmarks.
All of these aspects have to be thoroughly tested before the first
production rollout. Actually, this type of test can happen in parallel to the
actual development process because the relevant test cases do not
require that the business application already exists. The test exclusively
focuses on operational aspects that are independent of any particular
business application.
• Operational runbooks
An operational runbook is basically a document (or a set of documents)
describing how to operate a system. It captures procedures and best
practices, gives guidance and prescriptive information, and serves the
needs of operators and support personnel.
Runbooks exist in almost all IT organizations, but their creation is rarely
standardized or included as part of the normal process -- which is why it’s
on this list. Operational staff should be involved in the introduction of new
technology as early as possible, and the formal creation of runbooks is a
good way of getting an operations team acquainted with technology they
have never before operated.
• Financials
Every IT organization has an approach toward funding their activities. A
business-sponsored project will have a budget based on a business case
that balances cost against the benefit of a solution. IT teams have defined
ways of charging for their services.
While this is business as usual, the arrival of new technology brings extra
considerations with it that have to be dealt with. For example, the new
technology introduction process (described above) needs to be funded;
the first business pilot project using a new technology should not be
burdened with the extra cost of introducing it. Instead, the cost should be
spread according to expected usage per the technology roadmap.
Moreover, this factor speaks yet again to the necessity of getting
operational teams involved early so that they can better estimate the cost
of operating a new technology.
It’s all about governance
Assume for a second that a technology lifecycle and a formal process for introducing
new technology exist. What’s the next challenge? Any process is only as good as its
enforcement mechanisms, compliance criteria, and exception and escalation
procedures. In short, you need proper governance.
Being able to enforce the execution of a defined process is a challenge in any
environment. If no mechanisms exist to prevent process steps from being ignored,
then ignored they will be, because of time, budget, or other business pressures.
Defining proper governance should therefore be a core part of defining and
implementing a new technology introduction process. As with all aspects of
governance, sponsorship from senior executives goes a long way. There should be
formal compliance criteria and related checklists that gate the progression from one
part of the process to the next. Relevant reviews and audits must be embedded in
the process itself, together with appropriate role definitions (for example, separation
between the new technology owner and the new technology reviewer).
Conclusion
The lack of a formal process for introducing new technology into an IT environment
is one of the biggest challenges faced by companies looking to leverage new
products. Such a process guides a technology along its lifecycle, ensures proper
and timely involvement by technical teams (like operations), and defines proper
governance mechanisms ensuring the process is always properly executed.
Resources
• IBM developerWorks WebSphere
About the author
Andre Tost
Andre Tost works as a Senior Technical Staff Member in the IBM Software Services
for WebSphere organization, where he helps IBM customers establish
Service-Oriented Architectures. His special focus is on Web services and Enterprise
Service Bus technology. Before his current assignment, he spent ten years in various
partner enablement, development and architecture roles in IBM software
development, most recently for the WebSphere Business Development group.
Originally from Germany, Andre now lives and works in Rochester, Minnesota. In his
spare time, he likes to spend time with his family and play and watch soccer
whenever possible.
Comment lines: Integrating WebSphere Service
Registry and Repository with Tivoli Application
Dependency Discovery Manager
Skill Level: Intermediate
Robert R. Peterson ([email protected])
WebSphere Enablement Consultant
IBM
06 Oct 2010
Using the IBM® WebSphere® Service Registry and Repository Discovery Library
Adapter (DLA), administrators can see the Web services present in an IT
environment in the same IBM Tivoli® Application Dependency Discovery Manager
user interface with which they view other resources, applications, and systems. Here
is a high level overview of the integration possible between these two products that
could help you enhance your understanding and visibility of your overall IT
environment.
Get a better view
If you are using IBM® WebSphere® Service Registry and Repository (hereafter
referred to as Service Registry and Repository), chances are that you’re integrating
it with other IBM WebSphere products, such as IBM WebSphere Message Broker or
IBM WebSphere DataPower® SOA Appliances. But did you know that you can also
integrate Service Registry and Repository with several IBM Tivoli products as well?
For example, the status of a service in Service Registry and Repository can be
updated with IBM Tivoli® Composite Application Manager (ITCAM) for SOA, and
Service Registry and Repository can also synchronize with IBM Tivoli Change and
Configuration Management Database.
The purpose of this article, however, is to highlight how you can export metadata
about WSDL services from Service Registry and Repository and then load that
metadata into IBM Tivoli Application Dependency Discovery Manager (TADDM).
With information on Service Registry and Repository Web services in TADDM, an
administrator can have a holistic view of all the Web services and policies active in
their IT environments from one place: the TADDM user interface.
What is Tivoli Application Dependency Discovery Manager?
A typical data center has an array of different systems and applications that are
used by multiple teams on multiple projects. Keeping track of the machine inventory
and the applications running throughout a data center can easily become
overwhelming. Taking things a step further, determining what dependencies and
relationships these different systems have amongst each other can be an even more
daunting task. This is where TADDM can help: Tivoli Application Dependency
Discovery Manager is designed to serve as a repository for what is in your data
center -- from switches to computer systems to applications -- and how they all
interact with each other.
TADDM has an extensive graphical interface with which you can list and query for
particular configuration items, as well as view relationships and topologies. TADDM
also maintains a change history of configuration items along with the ability to take a
"snapshot" of a version so that you can compare configuration items. With these
tools, an administrator can easily see the changes to the data center over time.
TADDM also has customization and organization capabilities for the components it
stores; for example, components can be grouped into business applications.
TADDM can be populated with information about your IT environments in several
ways. For example, TADDM can perform scans of IP subnets, during which it
intelligently discovers systems and components for you. TADDM also has a bulk
loading capability with which data can be imported in IDML (Identity Markup
Language) format. Discovery Library Adapters (DLAs) are small standalone software
components that collect data in IDML format. Several DLAs are available for Tivoli
products (like IBM Tivoli Monitoring), z/OS environments, databases, and
applications -- and also for WebSphere Service Registry and Repository, which you
will see shortly.
How the integration works
The DLA for Service Registry and Repository is shipped as part of ITCAM for SOA; it
is an independent component with its own installation package. Once installed and
configured, the DLA has the capability to scan Service Registry and Repository and
discover the services present in the repository. The DLA can be installed on an
independent machine (Figure 1) or installed locally on the same server as Service
Registry and Repository. If it is installed independently, it requires a copy of the
Service Registry and Repository client JARs so it can communicate remotely with
Service Registry and Repository. The DLA produces an XML file that contains
metadata about all the WSDL services it finds in Service Registry and Repository,
along with associated WS-Policy documents.
Figure 1. Overview of Discovery Library Adapter usage
To execute the DLA, use the WSRR_DLA command found in the DLA's bin directory.
For example, on Linux®, the command would be:
./WSRR_DLA.sh -r
The -r switch indicates that the IDML should refresh completely instead of listing
only recent Service Registry and Repository changes. When the operation is
complete, a file with a name similar to this is written to the DLA /staging directory:
WSRRv600-L.hostname.2010-09-13T16.05.23.449Z.refresh.xml
The Service Registry and Repository DLA can also be configured to copy the
resulting XML file via FTP or SFTP to an off-box location. The DLA can be
configured to include all WSDL services in Service Registry and Repository or,
alternatively, it can pull in only services within one or more Service Registry and
Repository classifications.
The XML files produced by the DLA conform to the IDML schema; the files produced
are often referred to as IDML books. IDML uses a common data model which
standardizes how system solutions and technologies represent resources and
relationships. TADDM can import IDML books. It adds the resources and any
relationships in the book to its repository.
To import the Service Registry and Repository IDML book into TADDM, use the load
command loadidml.sh from the TADDM server machine’s
$COLLATION_HOME/bin directory. For example, on Linux the command would be:
Integrating WebSphere Service Registry and Repository with Tivoli Application Dependency Discovery Manager
© Copyright IBM Corporation 2010. All rights reserved.
Page 3 of 7
developerWorks®
ibm.com/developerWorks
./loadidml.sh -f /tmp/WSRRv600-L.hostname.2010-09-3T16.05.23.449Z.refresh.xml
Using these commands in conjunction with the DLA’s capability to transfer files, the
process of importing Web services from Service Registry and Repository into
TADDM can be easily scripted using (for example) simple shell scripts and UNIX®
cron processes. This enables the WSDL imports to TADDM to be automated.
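As one illustration of that, here is a minimal sketch of such a script, meant to run
on the machine where the DLA is installed. Apart from the WSRR_DLA.sh and loadidml.sh
commands shown above, everything in it is an assumption to adapt to your environment:
the install paths, host name, and drop directory are placeholders, and scp and ssh
are used for the copy and remote load purely for brevity (the DLA’s own FTP or SFTP
transfer option could handle the copy step instead).

#!/bin/sh
# Hypothetical nightly refresh: regenerate the WSRR IDML book and load it into TADDM.
# All paths and host names below are placeholders.
DLA_HOME=/opt/wsrr-dla              # where the Service Registry and Repository DLA is installed
TADDM_HOST=taddm.example.com        # TADDM server
COLLATION_HOME=/opt/IBM/taddm/dist  # placeholder for $COLLATION_HOME on the TADDM server
DROP_DIR=/tmp/wsrr-books            # where books are copied on the TADDM server

# 1. Produce a full-refresh IDML book (written to the DLA's staging directory).
cd "$DLA_HOME/bin" && ./WSRR_DLA.sh -r

# 2. Copy the newest book to the TADDM server (the DLA's FTP/SFTP option could do this instead).
BOOK=$(ls -t "$DLA_HOME"/staging/WSRR*refresh.xml | head -n 1)
scp "$BOOK" "$TADDM_HOST:$DROP_DIR/"

# 3. Load the book into TADDM.
ssh "$TADDM_HOST" "$COLLATION_HOME/bin/loadidml.sh -f $DROP_DIR/$(basename "$BOOK")"

A crontab entry on the DLA machine, such as 0 2 * * * /opt/wsrr-dla/refresh-taddm.sh
(again, a placeholder path), would then refresh the Web service inventory in TADDM
every night.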
After loading the book, you’ll notice new resources listed in the TADDM user
interface as Web service resources. For example, Figure 2 shows three Web
services imported to TADDM from Service Registry and Repository:
FinancialService, QuoteService, and CRService1.
Figure 2. TADDM user interface
Also notice that under the Details tab for a service, you can see the WSDL
operations debitOperation and creditOperation. Using this process, a TADDM
administrator can keep track of Web services in TADDM along with other
components and systems in their IT environment.
Some extra integration tips
Here are some additional points to help you with this integration:
• Logging
The log file for the Service Registry and Repository DLA can be found at
/logs/WSRRDLALog.log. Be sure to take a look at the log file after running
WSRR_DLA.sh.
• Validation
TADDM comes with a certification tool for IDML files. The tool can
validate the integrity of any IDML file, not just those generated by the
Service Registry and Repository DLA. You should run the certification tool
on Service Registry and Repository IDML books before they are imported
into TADDM. The tool consists of a JAR file called idmlcert.jar located in
/cmdb/dist/sdk/dla/validator/v2 within your TADDM installation directory.
Listing 1 shows an example of its usage along with some example output.
Listing 1. Using the IDML certification tool
java -Xmx256m -Xmx256m -jar idmlcert.jar -verbose WSRRv600-L.hostname.2010-09-3T16.05.23.449Z.refresh.xml

CDM.xsd version=2.10.6
idml.xsd version=0.8
NamingRules.xml version=2.10.11
DL model version=2.10.6
=======================================================================
File: WSRRv600-L.nc185067.tivlab.raleigh.ibm.com.2010-09-13T16.05.23.449Z.refresh.xml
=======================================================================
Certification tool found:
  19 Managed elements
  32 Relationships

[PASS] - TEST 00 (XML Parse)
[PASS] - TEST 01 (All MEs have a valid ID)
[PASS] - TEST 02 (superior reference IDs in book)
[PASS] - TEST 03 (Attributes are valid)
[PASS] - TEST 04 (All managed elements have a valid naming rule)
[PASS] - TEST 05 (All managed elements are valid)
[PASS] - TEST 06 (All relationships are valid)

Classes used: (occurrences)
  process.Document (4)
  process.ManagementSoftwareSystem (1)
  process.Repository (1)
  soa.WSOperation (4)
  soa.WSPort (3)
  soa.WSPortType (3)
  soa.WebService (3)

Relationships used: (occurrences)
  definedUsing(soa.WSOperation, process.Document) (5)
  definedUsing(soa.WSPort, process.Document) (4)
  definedUsing(soa.WSPortType, process.Document) (4)
  definedUsing(soa.WebService, process.Document) (4)
  federates(process.Repository, process.Document) (4)
  federates(soa.WSPortType, soa.WSOperation) (4)
  federates(soa.WebService, soa.WSPort) (3)
  invokedThrough(soa.WSOperation, soa.WSPort) (4)

Book passed all certification tests
Elapsed time: 5.9 seconds
The certification tool is not guaranteed to find all problems with an IDML
book, but if a problem is found the tool provides helpful debugging
information. (A short sketch after this list of tips shows one way to wire the
certification check into a scripted load.)
• Viewing
You can view all the objects that have been imported into TADDM from
the IDML book. To do this, select MSS from the TADDM Edit menu, then
scroll to the bottom and select the Service Registry and Repository DLA.
Click List CIs. You should see a list of objects similar to Figure 3.
Figure 3. WSRR objects imported into TADDM
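Following up on the Validation tip above, here is a minimal sketch of one way to make
the certification check gate a scripted load on the TADDM server. It assumes
COLLATION_HOME is set as in the earlier loadidml.sh example; the jar path and book
name are placeholders, and the check greps for the success message shown in Listing 1
rather than relying on the tool’s exit code.

#!/bin/sh
# Hypothetical pre-load check run on the TADDM server. Adjust the placeholder paths
# to your installation; idmlcert.jar and loadidml.sh are the commands described above.
IDMLCERT_JAR=/path/to/taddm-install/cmdb/dist/sdk/dla/validator/v2/idmlcert.jar
BOOK=/tmp/wsrr-books/WSRRv600-L.hostname.2010-09-13T16.05.23.449Z.refresh.xml

# Certify the book first; on success the tool reports "Book passed all certification tests".
if java -Xmx256m -jar "$IDMLCERT_JAR" -verbose "$BOOK" | grep -q "passed all certification tests"
then
    "$COLLATION_HOME/bin/loadidml.sh" -f "$BOOK"
else
    echo "IDML book failed certification; skipping the TADDM load" >&2
    exit 1
fi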
Conclusion
With the WebSphere Service Registry and Repository DLA, administrators can see
the Web services present in an IT environment in the same Tivoli Application
Dependency Discovery Manager user interface with which they view other
resources, applications, and systems. This article provided a brief introduction to
TADDM with an overview of the integration between Service Registry and
Repository and TADDM that’s possible using the product-specific DLA, and offered
additional tips for using the Service Registry and Repository IDML book. Use the
Resources below to further investigate how TADDM can help you by enhancing your
understanding and visibility of your overall IT environment.
Resources
• Information Centers
• WebSphere Service Registry and Repository
• Tivoli Application Dependency Discovery Manager
• Tivoli Composite Application Manager for SOA
• Redbook: Integrating Tivoli Products
• IBM developerWorks Tivoli
• IBM developerWorks WebSphere
About the author
Robert R. Peterson
Robert R. Peterson is a solution architect for Tivoli's Integration Center of
Competency. He works to improve integration across Tivoli's Service Availability and
Performance Management (SAPM) product portfolio. Robert is an IBM Master
Inventor for his patent contributions, an IBM Press author, and a frequent conference
speaker. Visit his website.