Wednesday, October 8, 2008

Anatomy & Physiology of Android OS

Android has been designed as a modern mobile platform that will enable applications to take full advantage of the mobile device capabilities. This session will break down the various components of the Android platform, examine how they work, and give developers a deeper understanding of the underlying technologies that drive the Android platform...

This video tutorial will give you a deep understanding of how Linux kernel 2.6.24 is stacked up inside the Android OS, including the different drivers, libraries, and runtime interfaces that drive the Android platform and the applications that run on it....
Source: GoogleDevelopers

Thursday, October 2, 2008

EJB 3.1 Specifications Overview

Source: The Server Side.....

Kenneth Saks is the specification lead for Enterprise JavaBeans 3.1 (JSR 318). This talk will give the latest update on the contents of the EJB 3.1 specification, which will soon be released for Public Draft Review.

With its 3.0 release, the Enterprise JavaBeans™ (EJB™) architecture underwent a dramatic simplification targeted at ease of use for application developers. The purpose of the Enterprise JavaBeans 3.1 specification is to further simplify the EJB architecture by reducing its complexity from the developer's point of view, while also adding new features in response to the needs of the community.

Topics will include: .war packaging of EJB components, improved unit testing support, portable global JNDI names, singleton beans, startup/shutdown callbacks, a simplified (no-interface) local view, asynchronous session bean invocations, automatic timer creation, and more.
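To make the asynchronous session bean item concrete, here is a minimal plain-Java sketch of the semantics EJB 3.1 proposes: the caller gets a `Future` back immediately while the work runs on another thread. In a real bean the container supplies the thread via an `@Asynchronous` annotation; the class and method names below are illustrative, not from the specification.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Plain-Java sketch of EJB 3.1's asynchronous invocation semantics.
// In a container this would be a session bean with an @Asynchronous
// method; here an executor stands in for the container's thread pool.
public class ReportBean {

    // Returns immediately with a Future; the report is built on
    // another thread, as an @Asynchronous method would be.
    public Future<String> generateReport(String name) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            return pool.submit(() -> "report:" + name);
        } finally {
            pool.shutdown(); // the already-submitted task still completes
        }
    }

    public static void main(String[] args) throws Exception {
        Future<String> f = new ReportBean().generateReport("q3");
        System.out.println(f.get()); // blocks only when the result is needed
    }
}
```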


Visa to Develop Applications for Android, Nokia Phones

Visa wants to become a part of your mobile phone, working with Nokia on realizing mobile payments and also announcing services for Google's Android platform.

The idea of using the mobile phone as a payment device has been around for a long time, but has not yet encountered widespread success. With more than 3 billion mobile devices already in the market today, though, Visa sees a big opportunity to extend its reach, according to Elizabeth Buse, global head of product at Visa.

For future owners of the T-Mobile G1 and other upcoming Android phones, Visa will at first include three services: Alerts, Offers, and Locator, which will be available for download before the end of the year.

With Alerts consumers will receive what Visa calls "near real-time" notification of purchase activity, based on rules defined by the cardholder.

Offers and Locator will make it possible for users to receive targeted offers, based for example on previous purchases, and show consumers nearby locations of shops or ATMs that accept Visa. The two functions can also be combined. Consumers would opt in to the services, only activating those they choose, and would be able to opt out at any time, according to Visa.

The services will at first be offered to Chase Visa card holders in the U.S., and Visa says it plans to add more banks later.

Visa is also developing a payment application that will enable consumers to make mobile payments with Android phones.

Visa's work with Nokia is also about making payments possible using the mobile phone. Using the Nokia 6212 classic, expected to be available starting next month, users will be able to make contactless payments, remote payments, money transfers, as well as receive alerts and notifications, according to Nokia.

What makes all that possible is built-in support for a technology called NFC (Near-Field Communications), which lets consumers simply wave the phone within a few inches of a special point-of-sale reader to complete a transaction.

Nokia and Visa will first do trials with financial institutions, but for it to really take off the retail sector has to get onboard, and that is currently a blocking point, according to Richard Webb, directing analyst at Infonetics Research.

"Mobile payments are a good thing for the mobile sector, but there is no real gain for the retail sector, which would have to upgrade its systems for payments to work," said Webb.

Companies such as Nokia and Visa have to explain what's in it for retailers, but there are also other aspects that need to be addressed before mobile payments can take off, including security and trust, according to Webb.

Tuesday, May 13, 2008

Business Process Management- Quick look at three more products

Business Process Management, the concept now gripping every major enterprise application deployment, coupled with SOA/Web Services, has been part of my work for more than a year now, and the more I learn about it, the more of a novice I feel; it is such a vast concept. Many people have asked me, "How do you define Business Process Management?" and I tend to come up with different answers (not my fault... each of the BPM solution providers has its own definition and meaning for it).

First things first: as part of BPM initiatives so far, I have been exposed to different products such as JBoss jBPM (I still feel this is the best one as far as IDE-based BPM tools go, with Intalio the next best), ALBPM, Savvion, and Intalio.

Currently I am working on an enterprise application using ALBPM (AquaLogic BPM) from BEA as the BPM product. The good thing is that it has been adopted by many leading companies; what sometimes worries me is the very limited documentation and support from BEA (that's my perception). Coming from an IDE background for most of my development, I feel it is behind JBoss jBPM and Intalio, because those products support IDEs like Eclipse, which definitely makes my life easier as a developer. BEA may argue that ALBPM Studio 6.0 has a development environment similar to, or rather built on, the Eclipse IDE, but I am using ALBPM Studio 5.7 right now, which doesn't come with full IDE support. Still, I must say that among commercial solutions this one fits the bill...

And over the past year, one thing that has concerned me as a BPM developer is how best to help business analysts and other business folks understand the business processes I have developed. In hindsight, we cannot expect them to be technically savvy enough to understand the workflow pattern being used, or how it was developed; their main priority is how well the workflow/application fits their business needs. For example, to overcome this situation in one of my assignments using JBoss jBPM, I had to develop an EXE installer (something like Java Web Start) for the BAs and testers, which simulated the workflow processes without their having to see it in action in an IDE. Then, out of curiosity and research, I realized there are BPM solutions/products in the industry that are so easy to understand that even non-technical folks can use them. In this article I am going to introduce such BPM products, most of which are browser based rather than IDE based.

Today we will look at three BPM products that have been adopted by noted companies, and at how they are serving workflow management and development needs. I will give an overview of each of them, along with their features and benefits.
I) Appian: I came across this product by accident, and it is definitely worth trying in any business process management effort. When I saw the list of customers who have adopted the Appian Enterprise-based business process management (BPM) solution for their IT departments, I figured there must be some reason for it, and then I realized the advantages and benefits it brings to the table. From its legacy in the portal/knowledge-management (KM) space, Appian has built its functionality into a full-blown BPM suite. In turn, it is built on a Java base following the Unified Modeling Language (UML) and the XML Process Definition Language (XPDL), both solid standards that seem to be doing what standards should do. From an open source perspective, Appian comes with JBoss out of the box but of course works with WebSphere and WebLogic. It makes use of the Lucene search engine as well. Not open source, but Appian is doing some really interesting work with KX Systems' kdb.

But the good news is that users don't have to worry about all that technology stuff. Appian's Form and Rules Designers and other user-facing components all work via a straight Web interface (no need for Flash, plug-ins, etc.). That's important for security requirements, which are key to many of Appian's government customers, but it is also good for ease of use for any customer. More important are the Appian BPM implementation templates developed over the ten years since the company was founded. Examples are available for procurement in the federal government, for wealth management with rules for credit scoring, and a program with Instill to build a quality management solution for the food-service industry. I think the best place to learn about Appian is its recorded webinars.

II) Cordys
Cordys BPMS is a single toolset, built from the ground up to offer comprehensive BPM and SOA capabilities, giving business managers direct control over new and existing processes. If you look at the features listed below, I am sure you will be curious to use and test it in your own BPM efforts. And when I saw that WebEx, the leading provider of Web communication services and a utility I use frequently for my own business needs, had adopted Cordys, I gave it a try.
Features & Benefits:
a) Graphical, browser-based interface
* Accurately draw executable business processes using a Visio-like application
* Bridge the gap between business and IT
* Enable business users to control and quickly change their own processes
* Consolidate and present data from disparate sources as one unified and personalized workspace for higher productivity
b) Intuitive, drag-and-drop business process execution, with virtually no coding
c) Distributed, fault-tolerant, and scalable architecture
* Configure nonstop, fault-tolerant runtime environments with no single point of failure on commodity hardware
d) Real-time alerts and notifications
* Accelerate responsiveness to critical events and exceptions
e) Operational intelligence dashboard
* Obtain real-time, enterprise-wide visibility of business process performance and business metrics
f) Historical analysis
* Discover enterprise performance trends for more-informed decision making
g) Composite application framework
* Create Web 2.0 interfaces quickly by visualizing, combining, and manipulating data from disparate sources
h) Composite application developer
* Create needed business services
i) SOA Grid
* Connect incompatible systems together, allowing them to communicate
* Rapidly assemble composite objects based on a variety of previously non-interoperable backends
* Govern and manage Web services, both in design and run time
j) Data manager
* Facilitate template-based reconciliation of differences among disparate data connected to the Cordys platform
k) Secure file transfer
* Provide end-to-end document security, including legal-grade non-repudiation and guaranteed integrity of transmitted data

III) Lombardi
Last but not least is Lombardi. The best place to learn about this product is its resources section.
A major customer that has adopted Lombardi's Teamworks BPM software is Wells Fargo Financial.

I hope organizations start looking at adopting one of these for their BPM implementations, considering how easily they can be understood and used by workflow developers, business folks, and business analysts.

Monday, February 18, 2008

Developing Secure Web Services

Web Services and SOA are everywhere. They have certainly had a positive impact on the IT community, proving their worth through adoption by leading industry players such as IBM, HP, Oracle, Microsoft, Novell, and Sun in their Web Services products. Yes, I am talking about SOA and Web Services, which are here to stay and look set to dominate the deployment of major enterprise applications in the coming years.

IBM’s definition of Web Services states that “Web Services are self-contained, modular applications that can be described, published, located, and invoked over a network, generally, the World Wide Web.” In one of my previous articles, Web Services Patterns, we looked at the advantages provided by SOA applications.
When the definition refers to Web Services being invoked over the World Wide Web, it means that they use HTTP as the transport layer and an XML-based message layer. However, Web Services do not actually require HTTP—XML-formatted data can be sent over other transport protocols (message queuing, for example), which may be more suited to mission-critical transactions.

Web Services generally uses the HTTP and SSL ports (TCP ports 80 and 443, respectively) in order to pass through firewalls. In the early days of “Web Services,” vendors would say that their products were “firewall compliant.” This meant that firewalls would not block the Web Services traffic, whereas CORBA traffic attempting to use CORBA-specific ports may be blocked. Web Services make it easier to deploy distributed computing without having to open firewall ports, or having to “punch a hole in the firewall” as network administrators like to say. This “under the radar” deployment has serious security implications. Most firewalls are unable to distinguish Web Services traffic, traveling over HTTP and SSL ports, from Web browser traffic.
The word “Services” in Web Services refers to a Service-Oriented Architecture (SOA). SOA is a recent development in distributed computing, in which applications call functionality from other applications over a network. In an SOA, functionality is “published” on a network where two important capabilities are provided— “discovery,” the ability to find the functionality, and “binding,” the ability to connect to the functionality. In the Web Services architecture, these activities correspond to three roles: Web Service provider, Web Service requester, and Web Service broker, which correspond to the “publish,” “find,” and “bind” aspects of a Service-Oriented Architecture.
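The publish, find, and bind roles above can be sketched with a toy in-memory broker. Real deployments describe services in WSDL and locate them through a UDDI-style registry; the class and service names below are purely illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy service broker illustrating the three SOA roles: the provider
// publishes a service, the requester finds it by name, and then binds
// to it by invoking it.
public class ServiceBroker {
    private final Map<String, Function<String, String>> registry = new HashMap<>();

    public void publish(String name, Function<String, String> service) {
        registry.put(name, service); // provider role: "publish"
    }

    public Function<String, String> find(String name) {
        return registry.get(name);   // broker answers the requester's "find"
    }

    public static void main(String[] args) {
        ServiceBroker broker = new ServiceBroker();
        broker.publish("echo", in -> "echo:" + in);         // publish
        Function<String, String> svc = broker.find("echo"); // find
        System.out.println(svc.apply("hello"));             // bind and invoke
    }
}
```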

Web Services security focuses on the application layer, although security at the lower layers remains important. The implementation technologies on which we focus are HTTP and SOAP, although we will keep SMTP security in mind also, since SOAP can be bound to SMTP as well as HTTP. It may not seem immediately obvious why security for SOAP presents such a challenge. After all, SOAP is generally bound to HTTP, which already has SSL for authentication and confidentiality. In addition, many Web authorization tools already exist. It is a reasonable question to ask why these aren't enough, and the answer is made up of a number of reasons.

The first reason is that, although frequently bound to HTTP, SOAP is independent of the underlying communications layers. Many different communications technologies can be used in the context of one multi-hop SOAP message; for example, using HTTP for the first leg, then SMTP for the next leg, and so forth. End-to-end security cannot therefore rely on a security technology that presupposes one particular communications technology. Even in the case of a single SOAP message targeted at a Web Service, transport-level security only deals with the originator of the SOAP request. SOAP requests are generated by machines, not by people. If the Web Service wishes to perform security based on the end user, it must have access to authentication and/or authorization information about the end user on whose behalf the SOAP request is being sent. This is the second reason for Web Services security.

SOAP is a technology used to enable software to talk to other software much more easily than was previously possible. End users (that is, humans) do not make SOAP messages themselves. However, if access to the Web Service is to be decided based on information about the end user, the Web Service must have access to the information that allows it to make this authorization decision. This information does not have to include the end user's actual identity.

How can this information about the end user be conveyed to the Web Service? Session-layer or transport-layer security between the application server and the Web Service doesn't convey information about the identity of the end user of the Web Service. It merely conveys information about the application server that is sending the SOAP message, and many of the requests to the Web Service may originate from that one application server. This challenge is addressed by including security information about the end user in the SOAP message itself. This information may concern the end user's identity, attributes of the end user, or simply an indication that this user has already been authenticated and/or authorized by the Web server. This information allows the Web Service to make an informed authorization decision. This scenario is likely to be widespread where many Web Services are used to implement functionality "behind the scenes." It shouldn't be the case that the end user has to reauthenticate each time a SOAP request must be sent on their behalf. The challenge of providing this functionality is sometimes called "single sign-on" or "federated trust."
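The idea of carrying end-user information inside the message can be shown with a bare-bones envelope builder. The `<Security>`/`<UsernameToken>` elements below are a simplified stand-in for a real WS-Security header, and all names are illustrative.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

// Builds a minimal SOAP envelope whose header carries a token about the
// end user, so the receiving Web Service can authorize without
// re-authenticating. Simplified stand-in for a WS-Security header.
public class SoapUserHeader {
    static final String SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/";

    public static Document envelopeFor(String user) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element env = doc.createElementNS(SOAP_NS, "soap:Envelope");
        doc.appendChild(env);

        Element header = doc.createElementNS(SOAP_NS, "soap:Header");
        env.appendChild(header);
        Element sec = doc.createElement("Security");
        Element token = doc.createElement("UsernameToken");
        token.setTextContent(user); // info about the end user, not the app server
        sec.appendChild(token);
        header.appendChild(sec);

        env.appendChild(doc.createElementNS(SOAP_NS, "soap:Body"));
        return doc;
    }

    public static void main(String[] args) throws Exception {
        Document doc = envelopeFor("alice");
        System.out.println(doc.getElementsByTagName("UsernameToken")
                .item(0).getTextContent());
    }
}
```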

WS-Routing provides a means for SOAP messages to route between multiple Web Services. WS-Routing defines how to insert routing information into the header of a SOAP message. This routing information can be thought of as equivalent to the routing tables that operate at lower layers of the OSI stack for routing IP packets. WS-Routing means that one SOAP message may traverse multiple SOAP "hops" between the originator and the endpoint. The systems that implement these hops may have nothing in common apart from the ability to parse and route a SOAP message.

When routing between Web Services, the requirement for confidentiality can apply from the originator through to the final SOAP endpoint. It may be a requirement that information be kept secret from SOAP intermediaries. There may be a chance that intermediaries disclose the information either deliberately or through leaving "gaps" between one transport-level security session and the next. While the data is decrypted, it is vulnerable. This is the same problem that plagued the first release of the Wireless Access Protocol (WAP), in which data was decrypted in between the wireless encryption session and encryption on the fixed wire. This so-called "WAP gap" caused a loss of confidence in WAP security and was addressed in later releases of the WAP specification. Implementing encryption only at the transport level makes a "SOAP gap."

It is often noted that most security breaches happen not while data is in transit, but while data is in storage. This is the principle of least resistance: attempting to decrypt eavesdropped encrypted data from an SSL session is much more difficult than simply testing whether a Web site maintainer has remembered to block direct access to the database where the credit card numbers are stored. If decrypted data is stolen from a database, the consequences are no less dramatic. Once data has reached its final destination, it must be stored in a secure state.
Confidentiality for a SOAP transaction should not involve simply chaining instances of confidentiality together, since “SOAP gaps” of unencrypted data are available between each decryption and encryption.

Web Services Security Specifications
Confidential information in a SOAP message should remain confidential over the course of a number of SOAP hops. A number of industry specifications have been developed for this purpose, and they can be organized into two distinct categories:
* A standardized framework for including XML-formatted security data in SOAP messages.
* Standards for expressing the security data itself in XML format. This security information serves the high-level principles of security: confidentiality, authentication, authorization, integrity, and so forth.

WS-Security has emerged as the de facto method of inserting security data into SOAP messages. Work on WS-Security began in 2001; the specification was published by Microsoft, VeriSign, and IBM in April 2002, and was then submitted in June 2002 to the OASIS standards body in order to be made into an industry standard. WS-Security defines placeholders in the SOAP header in order to insert security data. It defines how to add encryption and digital signatures to SOAP messages, and also a general mechanism for inserting arbitrary security tokens. WS-Security is "tight" enough to present the definitive means of including security data in SOAP messages, but "loose" enough not to place limits on what that security data can be.

XML Encryption
XML Encryption is a specification from the W3C. It provides not only a way of encrypting portions of XML documents, but also a means of encrypting any data and rendering the encrypted data in XML format. XML Encryption makes encryption functionality easier to deploy.

XML Encryption is not a replacement for SSL. SSL is still the de facto choice for confidentiality between two entities that are communicating using HTTP. However, if the security context extends beyond this individual HTTP connection, XML Encryption is ideal for confidentiality. The capability to encrypt XML is nothing new, because XML is just text after all. However, the ability to selectively encrypt XML data is what makes XML Encryption so useful for Web Services. Encrypting an entire SOAP message is counterproductive, because the SOAP message must include enough information to be useful (routing information, for example). Selectively encrypting data in the SOAP message is useful, however. Certain information may be hidden from SOAP intermediaries as it travels from the originator to the destination Web Service.

XML Encryption does not introduce any new cryptography algorithms or techniques. Triple-DES or RSA encryption may still be used for the actual encryption. XML Encryption provides a way to format the meta-information about which algorithm was used, and when the encryption occurred. This aids the Web Service in decrypting the data, provided the decryption key is available to it. This is important, because prior to XML Encryption the only standardization of encryption data was for e-mail messages (that is, S/MIME). If an organization wished to send encrypted data to another organization, both organizations would have to agree on the format of the encrypted data, how and which algorithms to use, and possibly also how to send an encrypted key. Now that information can be contained in an XML Encryption block.
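The value of selective encryption can be shown without any XML machinery: encrypt only the sensitive field and leave the routing data readable. XML Encryption additionally standardizes how the algorithm and key metadata are expressed in XML; the sketch below uses plain JCE AES-GCM only to illustrate the selectivity, and the field and element names are illustrative.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Encrypts only the sensitive field of a message, so intermediaries can
// still read the routing information en route to the endpoint.
public class SelectiveEncrypt {

    public static String encrypt(String plain, SecretKey key, byte[] iv) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return Base64.getEncoder()
                .encodeToString(c.doFinal(plain.getBytes(StandardCharsets.UTF_8)));
    }

    public static String decrypt(String b64, SecretKey key, byte[] iv) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return new String(c.doFinal(Base64.getDecoder().decode(b64)), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        String card = encrypt("4111111111111111", key, iv);
        // Routing info stays in the clear; only <card> is opaque in transit.
        System.out.println("<order><routeTo>acme</routeTo><card>" + card + "</card></order>");
        System.out.println(decrypt(card, key, iv)); // the endpoint recovers the value
    }
}
```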

XML Signature
XML Signature is a specification produced jointly by the W3C and the Internet Engineering Task Force (IETF). Like XML Encryption, it does not only apply to XML. As well as explaining how to digitally sign portions of an XML document, XML Signature also explains how to express the digital signature of any data as XML. As such, it is an “XML-aware digital signature.” PKCS#7 is a means of rendering encrypted data, and signed data, which predates XML Signature and XML Encryption. Rather than using XML, it uses Abstract Syntax Notation number 1 (ASN.1). ASN.1 is a binary format, renowned for its complexity. Producing or verifying a PKCS#7 signature requires not just cryptography software, but also an ASN.1 interpreter. XML Signature also requires cryptography software, of course, but an XML DOM replaces the ASN.1 interpreter.
The power of XML Signature for Web Services is the ability to selectively sign XML data, and WS-Security describes how to include XML Signature data in a SOAP message. Being very selective about what data in an XML instance is signed is particularly useful for Web Services: for example, if a single SOAP parameter needs to be signed but the SOAP message's header needs to change during routing, an XML Signature can be used that signs only the parameter in question and excludes other parts of the SOAP message. Doing so ensures end-to-end integrity for the SOAP parameter, even as the request passes through intermediaries, while permitting changes to the SOAP header information.
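The end-to-end integrity argument can be illustrated with a plain detached signature over just one parameter's bytes: headers may change in transit, but verification covers only the signed parameter. XML Signature adds canonicalization and XML-aware references on top of this; the JDK `java.security.Signature` API below is used only as a simplified analogy, and all names are illustrative.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

// Signs only the bytes of one SOAP parameter, so intermediaries may
// rewrite headers without breaking verification. A real XML Signature
// would reference the parameter element and canonicalize it first.
public class SignParameter {

    public static byte[] sign(byte[] param, PrivateKey key) throws Exception {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initSign(key);
        s.update(param);
        return s.sign();
    }

    public static boolean verify(byte[] param, byte[] sig, PublicKey key) throws Exception {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initVerify(key);
        s.update(param);
        return s.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kp = kpg.generateKeyPair();

        byte[] param = "amount=100.00".getBytes(StandardCharsets.UTF_8);
        byte[] sig = sign(param, kp.getPrivate());
        System.out.println(verify(param, sig, kp.getPublic()));   // parameter intact
        byte[] tampered = "amount=999.00".getBytes(StandardCharsets.UTF_8);
        System.out.println(verify(tampered, sig, kp.getPublic())); // tampering detected
    }
}
```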

Security Assertions Markup Language (SAML) provides a means of expressing information about authentication and authorization, as well as attributes of an end user (for example, a credit limit), in XML format. SAML data may be inserted into a SOAP message using the WS-Security framework. SAML is used to express information about an act of authentication or authorization that has occurred in the past. It does not provide authentication, but can express information about an authentication event that has occurred in the past; for example, "User X authenticated using a password at time Y." If an entity is authorized based on the fact that it was previously authorized by another system, this is called "portable trust." SAML is also important for addressing the challenge of multihop SOAP messages, because separate authentication to each Web Service is often out of the question. By authenticating once, being authorized, and effectively reusing that authorization for subsequent Web Services, single sign-on for Web Services can be achieved.

Note that the information in a SAML assertion may not indicate the end user's identity. The user may have authenticated using a username and password, and the administrator of the Web site may have no idea of the user's actual identity. It may simply be an indication that the user presented credentials and was authenticated and authorized. SAML allows information to be placed into a SOAP message to say "this person was authorized according to a certain security policy at a certain time." If the recipient of this SOAP message trusts the issuer of the SAML data, the end user can also be authorized for the Web Service. This SAML data is known as an "assertion" because the issuer is asserting information about the end user. The concept of security assertions existed before SAML and is already widely used in existing software.
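At the relying party, "portable trust" reduces to a simple check: accept the assertion only if its issuer is trusted. Real SAML assertions are signed XML with conditions and timestamps; everything below (class names, issuer names) is a bare illustrative sketch.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Minimal sketch of SAML-style portable trust: the Web Service does not
// re-authenticate the end user; it authorizes based on who issued the
// assertion about a past authentication event.
public class PortableTrust {
    static final Set<String> TRUSTED_ISSUERS =
            new HashSet<>(Arrays.asList("idp.example.com"));

    static class Assertion {
        final String issuer;
        final String subject;
        Assertion(String issuer, String subject) {
            this.issuer = issuer;
            this.subject = subject;
        }
    }

    // Accept only assertions from issuers this service trusts.
    public static boolean authorize(Assertion a) {
        return TRUSTED_ISSUERS.contains(a.issuer);
    }

    public static void main(String[] args) {
        System.out.println(authorize(new Assertion("idp.example.com", "user-42")));
        System.out.println(authorize(new Assertion("rogue.example.org", "user-42")));
    }
}
```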

XML Access Control Markup Language (XACML) is designed to express access control rules in XML format. Although the two technologies are not explicitly linked, XACML may be used in conjunction with SAML. An authorization decision expressed in a SAML assertion may have been based on rules expressed in XACML.

Microsoft’s Passport technology takes a different approach to single sign-on. The user authenticates to the Passport infrastructure, either directly through www.passport.com or through an affiliate site that makes use of functionality provided by Passport. Once the user is authenticated and authorized by Passport, their authentication status is also available to other Web Services that use Passport. Like SAML, this provides single sign-on. However, the model is different, relying on a central point of authentication rather than SAML's architecture, where authentication happens at an individual Web Service. By being implemented at the site of the Web Service itself, SAML authentication and authorization information may be based on role-based security. Role-based security means that access to resources is based on the user's organizational role; for example, in a medical setting doctors may have access to certain information while nurses have access to different information.

In this article I have tried to give an overview of the different specifications that can be used to achieve Web Services security. For a detailed understanding, I suggest getting a good book like "Web Services Security", to which I owe these excerpts, for a clear picture of implementing and using these specifications to secure the Web Services you develop. And thanks to Mark O'Neil for his input and for allowing me to use some material from his book.

Recommended books :

Suggested Reading

WS-Security Specification

Web Services Standards & Specifications

Web Services Security-by IBM

Implementing Service Firewall Pattern

Suggested Video Tutorial

Secure and Reliable Web Services-by InfoQ

SAML-An Overview

Web Services Attacks & Defense Strategies

Thursday, February 14, 2008

Different Design Patterns for Web Services-An Overview

Web Services provide an important building block for integrating disparate computing platforms and, indirectly, provide a mechanism to integrate their global value chains. You can build Web Services after a system has been deployed, making them similar in many ways to today's EAI software, but you can also build them along with new software as the open Application Programming Interface (API) to the application.

Advantages of using Web Services
Using Web Services, you can build an API in a language-neutral and platform-neutral format; programmers can access data from one system and quickly move it to the other through the Web Service. There are several strengths to this approach:

  • Programmers can write the data-transfer programs in any language or platform with which they are comfortable.

  • The source and target systems can control the requests and updates of data in such a way that they do not interfere with a running system.

Consider the number of ways that a simple problem, such as notifying interested parties of a change to an object's state, is solved on a platform such as Java 2 Standard Edition. Some developers may use an intermediate file to track changes, with interested parties reading the file to find out when and how an object changed. Other developers may construct a point-to-point notification system, or even a set of one-to-many Publish/Subscribe patterns. Some developers may have one type of naming convention for the adding and removing of listeners; other developers may have no naming convention for the same operations. These are some of the areas where Web Services can be highly useful.
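The state-change notification problem described above is the classic Observer pattern. A minimal Java version (names illustrative) looks like this; a Web Service can expose the same idea behind a language-neutral interface:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Classic Observer: interested parties register listeners and are
// called back whenever the object's state changes.
public class OrderStatus {
    private final List<Consumer<String>> listeners = new ArrayList<>();

    public void addListener(Consumer<String> listener) {
        listeners.add(listener);
    }

    public void changeState(String newState) {
        for (Consumer<String> l : listeners) {
            l.accept(newState); // notify every interested party
        }
    }

    public static void main(String[] args) {
        OrderStatus order = new OrderStatus();
        order.addListener(s -> System.out.println("listener saw: " + s));
        order.changeState("SHIPPED");
    }
}
```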

Using Design Patterns
Patterns can be applied to any portion of the software cycle, which usually involves gathering requirements, creating the architecture, designing the software, and implementing it. Thousands of software design patterns document the common problems encountered by software designers and generic solutions for those problems. For example, an architect may give a system structure that identifies a point in the system where an object change drives listeners to make changes in their own state, or where the change kicks off a business process. In these cases, a designer can look and determine that the Publish/Subscribe or the Observer pattern can fulfill the requirements and constraints the architect put on the design. Once identification of patterns is complete, the generic structure given in a pattern drives the design of the particular system structure.

Web Services Design Patterns
The following patterns look at how Web Services implement the service-oriented architecture, how service implementations interact with the Web Service environment, and how to locate and use Web Services:
Service-Oriented Architecture:
The Web Services environment is an architecture implementation known as the service-oriented architecture. There are several different implementations of the service-oriented architecture with Web Services having the most penetration in the industry to date. Implementations of service-oriented architectures stress two attributes: implementation transparency and location transparency. Implementation transparency requires a common system structure that applies equally to all possible underlying service implementations and a neutral mechanism for describing services. Location transparency requires the use of agnostic interfaces.
Architecture Adapter: This pattern expands on the GoF Adapter pattern. Whereas the GoF Adapter pattern resides in object-oriented programming as a way to adapt an exposed interface from one component to the interface expected by a dependent component, the Architecture Adapter pattern is responsible for allowing two completely separate architectures to interoperate.
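As a small illustration of the adapter idea that the Architecture Adapter generalizes, here is a minimal sketch in plain Java (the sensor classes and the Fahrenheit-to-Celsius conversion are hypothetical, invented for this example): an adapter lets a client written against one interface use a component that exposes a different one.

```java
// Hypothetical example: a legacy component exposes Fahrenheit readings,
// while the client expects Celsius. The adapter translates between the two.
interface CelsiusSensor {            // interface the client depends on
    double readCelsius();
}

class LegacyFahrenheitSensor {       // existing component with a different interface
    double readFahrenheit() { return 212.0; }
}

class SensorAdapter implements CelsiusSensor {
    private final LegacyFahrenheitSensor legacy;
    SensorAdapter(LegacyFahrenheitSensor legacy) { this.legacy = legacy; }
    public double readCelsius() {
        // adapt: convert the legacy reading to the unit the client expects
        return (legacy.readFahrenheit() - 32.0) * 5.0 / 9.0;
    }
}

class AdapterDemo {
    static double read() {
        CelsiusSensor sensor = new SensorAdapter(new LegacyFahrenheitSensor());
        return sensor.readCelsius();
    }
    public static void main(String[] args) {
        System.out.println(read());   // 100.0
    }
}
```

The Architecture Adapter applies the same shape at a much larger granularity: instead of two object interfaces, it bridges two whole architectures (for example, an object-oriented program and a Web Service environment).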
Service Directory: In statically bound systems, and even in many dynamic systems, companies assume that their choice of software supplier is the right one. The Web Service paradigm challenges this tradition. Instead, by creating detailed metadata about a service, you allow a service user to locate your service and use it without application modification. The metadata in a service-oriented architecture includes the service's interface, its location for binding, its mechanism for communication, and information about the business that created it. This pattern goes into depth on the Service Directory patterns that are inherent in the leading service architectures and that you will encounter in Web Services.
Business Object: This pattern discusses the typical structure and contents of a single business object. Although the frequency that you will use a single business object for deployment is low, there are substantial lessons you can learn from the exercise of deploying a business object. As with the first three patterns, this pattern is heavy in discussion around the Web Service environment and lessons you can learn from deploying relatively simple objects.
Business Object Collection: In business, you will rarely find business objects that are not collected. Like the business object itself, handling collections with Web Services yields substantial instructional substance as you learn more about the Web Service environment.
Business Process (Composition): Business systems today revolve more around business processes than around supporting business objects. A business process does not necessarily correlate to a single business object but is more abstract in nature. This pattern looks at business processes and lays a general framework for exposing them as Web Services. The business process is also a form of composition. To achieve a business process, multiple business objects and, often, other business processes and activities must run.
Asynchronous Business Process: A world where all business processes are synchronous would be a fine world for programmers to live in. Unfortunately, most important business processes are not synchronous. Even the most basic business processes, such as fulfilling a book order, run into asynchronous complexities. In introducing the Asynchronous Business Process pattern, you will find many similarities to the relationship between business objects and business object collections.
Event Monitor: Often, the burden of determining when events occur in a service lies with the client. There are a variety of reasons for this, such as the service not having a reasonable publish/subscribe interface or the client desiring control of the event determination. This is a common, and relatively simple, design pattern to implement, with well-established roots throughout software history.
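A minimal sketch of the Event Monitor idea in plain Java (the stock service and its price field are hypothetical stand-ins for a remote service): the client polls the service and detects state changes by comparing each reading against the last value it observed.

```java
import java.util.ArrayList;
import java.util.List;

// a service whose state changes over time
class StockService {
    private double price = 10.0;
    void setPrice(double p) { price = p; }
    double getPrice() { return price; }
}

// the client-side monitor: polls and compares against the last observed state
class EventMonitor {
    private final StockService service;
    private double lastSeen;
    private final List<String> events = new ArrayList<>();
    EventMonitor(StockService s) { service = s; lastSeen = s.getPrice(); }
    void poll() {
        double current = service.getPrice();
        if (current != lastSeen) {                     // a change is an "event"
            events.add("price changed: " + lastSeen + " -> " + current);
            lastSeen = current;
        }
    }
    List<String> events() { return events; }
}

class EventMonitorDemo {
    static List<String> run() {
        StockService s = new StockService();
        EventMonitor m = new EventMonitor(s);
        m.poll();             // no change yet, nothing recorded
        s.setPrice(12.5);
        m.poll();             // detects the change
        m.poll();             // no further change
        return m.events();
    }
    public static void main(String[] args) { System.out.println(run()); }
}
```

In a real Web Service setting the `poll()` call would be a remote invocation running on a timer, which is exactly the burden the Observer pattern (next) shifts to the server.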
Observer: Rather than leaving a client to determine when data changed on a server, it is often more efficient to have the server component tell interested clients when data changes. This is especially true when the server component has a low frequency of updates compared to the frequency that clients will want to check. The Observer pattern formalizes the relationship between one or more clients and a Web Service that contains interesting state. The Web Service delivers events to interested clients when an interesting change occurs. The Gang of Four documented the Observer pattern. This implementation is similar to the original documentation of the pattern, with necessary information about Web Services.
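The core of the Observer pattern can be sketched in a few lines of plain Java (the observable service and its integer state are hypothetical, standing in for a Web Service with interesting state): clients register interest once, and the server pushes each change to them.

```java
import java.util.ArrayList;
import java.util.List;

interface Observer { void stateChanged(int newValue); }

class ObservableService {
    private final List<Observer> observers = new ArrayList<>();
    void addObserver(Observer o)    { observers.add(o); }
    void removeObserver(Observer o) { observers.remove(o); }
    void setValue(int v) {
        // the server pushes the event to every registered client
        for (Observer o : observers) o.stateChanged(v);
    }
}

class ObserverDemo {
    static List<Integer> run() {
        List<Integer> received = new ArrayList<>();
        ObservableService service = new ObservableService();
        service.addObserver(received::add);   // client registers interest once
        service.setValue(42);
        service.setValue(7);
        return received;                      // every change was delivered
    }
    public static void main(String[] args) { System.out.println(run()); }
}
```

In the Web Service version of the pattern, `stateChanged` is itself a Web Service operation that the client exposes so the server can call it back.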
Publish/Subscribe: The Publish/Subscribe pattern [Buschmann] is a heavily used pattern in EAI software as well as in many distributed programming paradigms. The Publish/Subscribe pattern is interesting in the context of the definition of Web Services as application components. Using a topic-based mechanism common in very loosely coupled architectures, you create a stand-alone event service that is, in effect, an application component. The event service forwards published events to subscribers without awareness of the application components that use the event service.
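A topic-based event service of this kind can be sketched in plain Java (the `EventService` class and topic names are hypothetical): publishers and subscribers share only a topic name and never reference each other, which is what makes the coupling so loose.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

interface Subscriber { void onMessage(String topic, String message); }

// a minimal stand-alone event service: it forwards published events to
// subscribers without any awareness of the components using it
class EventService {
    private final Map<String, List<Subscriber>> topics = new HashMap<>();
    void subscribe(String topic, Subscriber s) {
        topics.computeIfAbsent(topic, t -> new ArrayList<>()).add(s);
    }
    void publish(String topic, String message) {
        for (Subscriber s : topics.getOrDefault(topic, List.of()))
            s.onMessage(topic, message);
    }
}

class PubSubDemo {
    static List<String> run() {
        EventService bus = new EventService();
        List<String> log = new ArrayList<>();
        bus.subscribe("orders", (t, m) -> log.add("A got " + m));
        bus.subscribe("orders", (t, m) -> log.add("B got " + m));
        bus.subscribe("billing", (t, m) -> log.add("C got " + m));
        bus.publish("orders", "order-42");   // delivered to A and B only
        return log;
    }
    public static void main(String[] args) { System.out.println(run()); }
}
```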
Physical Tiers: Throughout the book and the sample implementations in the chapters, you will use a simple Java-based deployment mechanism built into Apache Axis. Therefore, your components live entirely within the process space that Apache Axis uses. This is not an optimal model for enterprise applications. The model discourages runtime reuse and creates a larger footprint than is necessary. Further, the event patterns produced some interesting challenges for a Web Service environment. A client interested in events from a Web Service often exists in its own process. This pattern discusses Web Service implementations that must, and often should, communicate to other processes for their implementation behavior.
Faux Implementation: One of the most fascinating aspects of the Internet is that someone or something can pretend to be something it is not and actually get away with it. As long as the interface and the behavior of a service implementation are what others expect, there is no way to tell what drives that behavior. The Observer and Publish/Subscribe patterns require clients to implement a Web Service to receive event publications. The Faux Implementation pattern shows that as long as the behavior fulfills the contract, there is no reason you have to implement a service with traditional mechanisms.
Service Factory: Class factories are common in Java programming. A class factory gives a mechanism to bind to a class implementation at runtime rather than compile time. The same capability is possible with service implementations. For example, there is no reason that a company must use a single package shipper for all shipments. Instead, the service factory illustrates how your application can determine what service to use at runtime.
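A minimal sketch of the Service Factory in plain Java, using the package-shipper example (the shipper classes, registry keys, and rate formulas are invented for illustration): the caller binds to a concrete service implementation by name at runtime rather than at compile time.

```java
import java.util.Map;

interface Shipper { double quote(double weightKg); }

class FastShipper implements Shipper {
    public double quote(double w) { return 10.0 + 2.0 * w; }   // premium rate
}
class CheapShipper implements Shipper {
    public double quote(double w) { return 2.0 + 1.0 * w; }    // economy rate
}

// the factory resolves a service by name; in practice the name would come
// from configuration or a service directory lookup
class ShipperFactory {
    private static final Map<String, Shipper> registry = Map.of(
        "fast",  new FastShipper(),
        "cheap", new CheapShipper());
    static Shipper lookup(String name) { return registry.get(name); }
}

class ServiceFactoryDemo {
    static double quote(String shipper, double weight) {
        // the caller never names a concrete class: binding happens at runtime
        return ShipperFactory.lookup(shipper).quote(weight);
    }
    public static void main(String[] args) {
        System.out.println(quote("cheap", 3.0));   // 5.0
    }
}
```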
Data Transfer Object: The Data Transfer Object pattern originated with Java 2 Enterprise Edition (J2EE) patterns. When you move from a single process application to a distributed application, calls between participants in the distributed architecture become more expensive in terms of performance. By giving clients mechanisms to get groups of commonly accessed data in single operations, you can streamline clients and lower the number of accesses necessary to your Web Service.
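A minimal sketch of the Data Transfer Object idea in plain Java (the customer fields and service are hypothetical): one coarse-grained call returns a serializable object carrying a group of commonly accessed data, instead of one expensive remote call per field.

```java
// Instead of many fine-grained remote calls (getName(), getStreet(), ...),
// the service returns one serializable transfer object in a single call.
class CustomerDTO implements java.io.Serializable {
    final String name;
    final String street;
    final String city;
    CustomerDTO(String name, String street, String city) {
        this.name = name; this.street = street; this.city = city;
    }
}

class CustomerService {
    // one coarse-grained operation replaces several network round trips
    CustomerDTO getCustomerDetails(int id) {
        return new CustomerDTO("Acme Corp", "1 Main St", "Springfield");
    }
}

class DtoDemo {
    static String run() {
        CustomerDTO dto = new CustomerService().getCustomerDetails(7);
        return dto.name + ", " + dto.city;   // all data from a single call
    }
    public static void main(String[] args) { System.out.println(run()); }
}
```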
Partial Population: The Data Transfer Object pattern passes fully populated data structures between programs. This is a great paradigm but creates a proliferation of data structures and relegates the service implementation to determining what the most likely groups of accessed data will be. Partial population takes a different approach to data transfers; it allows clients to tell the server what parts of a data structure to populate. In this way, you can lighten the burden on the communication mechanism as well as the query in the server object. This technique is especially useful for services that contain complex, nested data structures (not something you will typically find in a Web Service environment).
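Partial population can be sketched in plain Java as follows (the field names and the "expensive" orders lookup are hypothetical): the client names the fields it wants, and the server populates only those, so neither the wire nor the server-side query pays for data the client will not use.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// a loosely structured record: only the requested fields are present
class CustomerRecord {
    final Map<String, Object> fields = new HashMap<>();
}

class PartialCustomerService {
    CustomerRecord fetch(int id, Set<String> wanted) {
        CustomerRecord r = new CustomerRecord();
        if (wanted.contains("name"))   r.fields.put("name", "Acme Corp");
        if (wanted.contains("city"))   r.fields.put("city", "Springfield");
        if (wanted.contains("orders")) r.fields.put("orders", loadOrders(id)); // expensive
        return r;
    }
    private List<String> loadOrders(int id) { return List.of("order-1", "order-2"); }
}

class PartialPopulationDemo {
    static Set<String> run() {
        CustomerRecord r = new PartialCustomerService().fetch(7, Set.of("name", "city"));
        return r.fields.keySet();   // the expensive orders field was never loaded
    }
    public static void main(String[] args) { System.out.println(run()); }
}
```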
Suggested Reading

Patterns for Service Oriented Architecture

Web Services Integration Patterns-Part 1

Web Services Integration Patterns-Part 2

Enterprise Integration Patterns

Suggested Video Tutorial

Developing SOA applications

Real World Web Services

Web Services Middleware

Web Services Overview

That's all for now!

Tuesday, February 5, 2008

Performance Management of Java Applications

Today I would like to discuss performance management of Java applications and handling memory leaks in Java. I recently spent some unnerving moments tracking down memory-leak (OutOfMemoryError) issues in an enterprise application already in production in a clustered environment. That meant long meetings and discussions with the customer and senior management to fine-tune an application that was a major source of revenue and profit for them, and as a Senior Consultant I was looked upon as the source of guidance and technical tips to achieve that. That is when I realized that the old paradigm, that with Java we need not worry about allocating and freeing memory for objects, does not always hold true. I am not going to walk through the exact steps I followed or the code I wrote; it is better that you learn that yourself. Instead, here is an overview of the tools you can use to manage and fine-tune Java applications. To start with, let us understand the term garbage collection in Java. While preparing the proof of concept for the customer, as part of my R&D I came across the tools mentioned below, which will certainly help Java developers fine-tune their applications. My personal favorite is YourKit Java Profiler.

Garbage Collection in Java
In Java, you create objects, and the JVM takes care of removing them when the application no longer needs them, through a mechanism known as garbage collection. The job of the garbage collector is to find objects that are no longer needed by an application and to remove them when they can no longer be accessed or referenced. The garbage collector starts at the root nodes (references that persist throughout the life of a Java application) and sweeps through all of the nodes that are referenced. As it traverses the nodes, it keeps track of which objects are actively being referenced. Any objects that are no longer referenced become eligible for garbage collection, and the memory they used is returned to the Java virtual machine (JVM) when they are deleted. And how best can you observe and tune this? Below is a brief synopsis of the tools you can use for Java performance tuning.
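Reachability can be observed directly with `java.lang.ref.WeakReference`, which is also a handy tool when hunting leaks. A small sketch (note that `System.gc()` is only a request; the exact moment of collection is always up to the JVM):

```java
import java.lang.ref.WeakReference;

class GcDemo {
    // A WeakReference lets us watch whether the collector has reclaimed
    // an object: it is cleared once the object is no longer strongly reachable.
    static boolean reachableWhileReferenced() {
        Object strong = new Object();
        WeakReference<Object> weak = new WeakReference<>(strong);
        System.gc();   // request a collection; the object must survive it
        // comparing against 'strong' uses the reference here, so the object
        // is still strongly reachable and the weak reference cannot be cleared
        return weak.get() == strong;
    }
    public static void main(String[] args) {
        System.out.println(reachableWhileReferenced());
        // Once no strong reference remains (e.g. after this method returns),
        // the object becomes eligible, and some later GC cycle may reclaim it.
    }
}
```

A classic leak is the inverse of this picture: an object you believe is finished is still strongly reachable from, say, a static collection, so it can never become eligible. The profilers below exist to find exactly those lingering reference paths.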

Tools you can use for handling memory leaks and performance management

Visual Garbage Collection Monitoring tool
The visualgc tool attaches to an instrumented HotSpot JVM and collects and graphically displays garbage collection, class loader, and HotSpot compiler performance data.


jstat
The jstat tool displays performance statistics for an instrumented HotSpot Java virtual machine (JVM). The target JVM is identified by its virtual machine identifier (vmid).

Note:This utility is unsupported and may not be available in future versions of the J2SE SDK. It is not currently available on Windows 98 and Windows ME platforms.

jmap
jmap prints shared object memory maps or heap memory details of a given process, core file, or remote debug server. This utility is unsupported and may not be available in future versions of the J2SE SDK.


HAT (Heap Analysis Tool)
HAT is a program that analyzes a heap dump file for Java programs. It can help a developer debug and analyze the objects in a running Java program, and it is particularly useful when debugging unintentional object retention (sometimes called "memory leaks"). Starting with Java SE 6, HAT has been replaced by jhat, which is included with the standard Sun distribution; HAT is no longer maintained in a stand-alone configuration. HAT provides a convenient means to browse the object topology in a heap snapshot generated by the Java VM: it reads an hprof file, then sets itself up as a web server, allowing you to run queries against the heap dump contained within that file. For further information, read this article.


JProbe
JProbe is an enterprise-class Java profiler providing intelligent diagnostics on memory usage, performance, and test coverage, allowing developers to quickly pinpoint and repair the root cause of application code performance and stability problems that obstruct component and integration integrity. With JProbe's intuitive, unified UI, it is easier to navigate and configure all JProbe analysis tools. JProbe also provides a powerful filtering mechanism for controlling the data display, including nine different metrics for sorting and coloring data for clutter-free, easier viewing.

YourKit Java Profiler

The best part about YourKit Java Profiler is that it integrates seamlessly with IDEs such as Eclipse and IntelliJ IDEA, and provides full support for Java 5 and Java 6. For the list of benefits and features this great tool provides, read this.

That's all for now; feel free to share your thoughts and any other tools that can be used for handling memory leaks in Java.

Rational Purify
IBM Rational Purify is a runtime analysis solution designed to help developers write more reliable code. Reliability is ensured via two crucial functions: memory corruption detection and memory leak detection. For further understanding of using Rational Purify, read this article.

Suggested Video Tutorial

Maintaining Java Apps in Production Environment- by Alexandre Rafalovitch at InfoQ

Thursday, January 31, 2008

SOA Testing-A look at products you can use

SOA has been such a buzzword since late 2006 that it made me sit up and realize its virtue and importance in enterprise application development. Having been involved with SOA for a while now, implementing a couple of applications, I started thinking about how best to test an SOA application assembled through enterprise integration. Gartner expects that by mid-2009 nearly 80% of enterprise development and integration budgets and effort will be devoted to SOA applications alone, which doubled my efforts to get to know products and tools for effectively testing (both unit and integration) an SOA application suite, be it the front end, the services layer, the message exchange (ESB layer), or the database. That is when I came across LISA, the product I would like to share with everyone and briefly review.

The iTKO LISA 4 Complete SOA Test Platform is a comprehensive testing solution that reduces the effort and cost of SOA test creation and maintenance, while allowing your team to drastically improve quality levels over time through test reuse. LISA carries developers, QA teams, and business analysts from unit testing to regressions, functional testing, end-to-end integration, load testing, and monitoring after deployment. LISA is a complete no-code software testing solution that supports the entire team's development lifecycle for dynamic web apps, web services, ESB messaging layers, enterprise Java, .NET, data and legacy objects, and more.

Why do you need LISA?

To deliver quality SOA applications, enterprises need to achieve Complete, Collaborative, and Continuous SOA testing (the "Three C's"):

  1. Complete testing of business workflows across every heterogeneous technology layer of the SOA, at both a system and component level.
  2. Collaborative involvement of the whole team in quality. Enable both developers and non-programmers to define and share test cases that prove how the SOA meets business requirements during the entire application lifecycle.
  3. Continuous validation of a changing SOA deployment as it is consumed at runtime, ensuring that business requirements will continue to be met as the system dynamically changes.
LISA was built to specifically meet the above criteria.
  • Continuous testing at every phase. LISA can be used to automate unit and functional tests early in development, then leverage the same tests in regression builds, load tests and performance monitors that run over time. In addition to typical test data such as pass/fail or response times, LISA gives you real functional validation every time you test. So LISA gives you "black box" testing, "white box" testing, and every color "box" in between.
  • Every technology. LISA tests web applications, but can go way beyond "screen deep" in validating whether or not they are working as planned. In the same test case as your web apps, you automatically gain the protocols to directly instrument web services, messaging layers, databases, J2EE servers and more, without coding. And LISA's extensibility to test your custom apps is second to none.
  • At any scale. LISA supports incredibly complex, multidimensional test workflows with drastically reduced effort and very low system overhead. Using LISA Load/Performance Server, you can schedule your tests to run automatically at any time interval, or trigger them with any system event. And LISA's efficient load engines can simulate hundreds of thousands of users, with full functional validation and reporting of every transaction.
There are three different versions of LISA; here is an overview of each:
1) LISA Enterprise Edition SOA Testing: This edition provides a test client to directly invoke and verify business requirements at the service component level, at the integration layer, and across the entire workflow that makes up a business process. Using it, you can do functional testing of your web user interfaces and of the data residing below the UI. It also provides support for ensuring interoperability, predictable project delivery, and end-to-end quality for all major integration platforms, including IBM MQ-Series, TIBCO, webMethods, Oracle FUSION, Sun JCAPS, Sonic MQ, Fiorano, and other leading providers, along with business process validation.
2) LISA Server: LISA Server automates and schedules LISA test cases and suites, providing sophisticated staging, user simulation, and continuous build and performance test orchestration capabilities for a constantly evolving SOA application environment. It allows for testing individual components, processes, and workflows during design and development, during integration, and in deployment, with capabilities for both individual functional tests and system-wide business process load testing.
3) LISA Extension Kit (LEK): Many complex enterprise applications, even those based on open standards, are built within a custom framework or developed on platforms that LISA does not yet know about. The LEK allows LISA users to test custom systems just as natively, without writing test code, as the technologies LISA tests out of the box.

And coming to the benefits LISA provides in SOA testing, here is a brief synopsis.

SOA Platform Support enhancements provided by LISA include:

  • ESB Native Integrations: When you need to test an ESB, you should be able to test every aspect of your integration layer, and LISA natively tests against every access point of ESB systems in ways that no other solution can, whether those are JMS messages, web services running on the bus, or connected databases. LISA provides out-of-the-box support for IBM MQ-Series, TIBCO, webMethods, Oracle FUSION, BEA AquaLogic, Sun JCAPS, Sonic MQ, Fiorano, and other leading ESB/integration providers.
  • Governance Platform support: LISA tests provide an excellent way to check in SOA Tests as enforceable and verifiable SOA Policy example alongside services in the repository. Leading enterprise and public sector customers are using LISA as a quality certification platform to ensure trust across multiple services, and the divisions and teams that build, support, and leverage them. LISA supports both the process of Publishing services to a larger community with verifiable service levels, as well as Consuming services with well-defined requirements.
  • Virtual Endpoint Testing & Lookup: As SOA Governance practices evolve, the Registry/Repository is becoming a system of reference for flexibly building and managing the services that inhabit a loosely coupled environment. LISA interoperates with Type 2 and 3 UDDI registries and service repositories from leading providers such as CentraSite, Infravio, Systinet, BEA/Flashline, and others, providing a way to leverage them for lookup of the most recent services during test design, and dynamic "hookup" of services during test runs through the registry.
It also provides support for RIA (Rich Internet Application) testing, SOAP/WSDL testing, and JMS testing, to name a few more benefits.

Having gone through the docs and some webinars myself, I became curious to learn more about this product, and I am now in the process of evaluating it. I feel most SOA developers and architects should have a look at LISA and see how well it reduces the bottlenecks and headaches of an SOA initiative. It has user forums and plenty of white papers and documentation to make you feel at home and help you with best practices, installation guides, FAQs, and so on.

Note: LISA is a Java application and runs on JRE 1.4.2.x or 1.5.x. LISA has production customers on Windows, Linux, HP-UX, AIX, Solaris, and OS X.

Suggested Reading

LISA Webinars

Java Boutique's Article

Saturday, January 26, 2008

Importance of the "Java Message Service" (JMS API) in Server-Side Development

In my view, server-side programming and developing server-side components is the most challenging area for any Java developer. Gone are the days of simple client/server applications. With emerging technologies such as SOA, ESB (Enterprise Service Bus), and BPM (Business Process Management) being adopted by many large companies, in the financial sector and other domains, for their mainly n-tier applications, the effort that must go into designing and developing middle-tier components has increased; indeed, most open source frameworks focus on server-side programming, a critical component of any n-tier application. And one API that has made my life easy in that regard is the Java Message Service (JMS).

In my initial days with Java (I am talking about nine years back), I used technologies such as RMI, CORBA, and EJB on different assignments for developing such applications. Having watched the growth and emergence of new frameworks and APIs since then, I might have preferred to start my IT career a couple of years later; but those struggling days made me appreciate how much these new technologies help with design and development, particularly in integrating enterprise applications in the distributed computing arena: building highly scalable, reliable, loosely coupled messaging systems and providing asynchronous interactions among J2EE components and messaging-capable legacy systems, compared with the tightly coupled integration I was part of in my early days.

OK, let's analyze the strides made by the JMS API and the benefits and advantages it provides to any server-side developer or architect.

What is Messaging?
Messaging is a method of communication between software components or applications. A messaging system is a peer-to-peer facility: a messaging client can send messages to, and receive messages from, any other client. Each client connects to a messaging agent that provides facilities for creating, sending, receiving, and reading messages. Messaging enables distributed communication that is loosely coupled. A component sends a message to a destination, and the recipient retrieves the message from the destination; the sender and the receiver do not have to be available at the same time in order to communicate. In fact, the sender does not need to know anything about the receiver, nor does the receiver need to know anything about the sender. The sender and the receiver need to know only what message format and what destination to use. In this respect, messaging differs from tightly coupled technologies, such as Remote Method Invocation (RMI) or RPC, which require an application to know a remote application's methods.

Enterprise messaging products (or as they are sometimes called, Message Oriented Middleware products) are becoming an essential component for integrating intra-company operations. They allow separate business components to be combined into a reliable, yet flexible, system.
In addition to the traditional MOM vendors, enterprise messaging products are also provided by several database vendors and a number of Internet-related companies. Java clients and Java middle-tier services must be capable of using these messaging systems, and JMS provides a common way for Java programs to access them: the JMS API defines a common set of interfaces and associated semantics that allow programs written in the Java programming language to communicate with other messaging implementations.
That is where JMS is useful. And if you add Message-Driven Beans (MDBs), a combination of session beans and JMS clients introduced as part of the EJB 2.0 specification, then you have an effective asynchronous messaging system in your middle tier that covers areas such as security, concurrency, and transactions.

The JMS API enables communication that is not only loosely coupled but also:
• Asynchronous: A JMS provider can deliver messages to a client as they arrive; a client does not have to request messages in order to receive them.
• Reliable: The JMS API can ensure that a message is delivered once and only once. Lower levels of reliability are available for applications that can afford to miss messages or to receive duplicate messages.

Why do you need JMS ?
There are several reasons why you need JMS in an enterprise middleware application:
i) Decoupling: Different parts of an application can be developed such that they are not closely tied to each other.
ii) Flexible integration: Conversely, loosely coupled systems can be put together by using MDBs to wrap existing systems.
iii) Separation of the elements involved in messaging applications; for example, a logging event can be separated from, say, retrieving customer details in a banking application.
iv) Delivering low-level services of an application in offline mode: although the service must be provided, the main workflow does not have to wait for its completion, for example a logging service.
v) Delivering the same information to multiple parties.
vi) Most of the top J2EE application servers support messaging; Sun provides a list of JMS vendors.

Some myths and drawbacks of using JMS:
1) Additional overhead in handling messages and messaging systems.
2) A message-based system can be a single point of failure.

Different Messaging Models
There are two main ways to send messages: point-to-point and publish-subscribe.
A point-to-point (queue) messaging application has a single producer (usually) and a single consumer. The producer produces messages while the consumer consumes them. A point-to-point system can actually have multiple producers, but usually only a single consumer. A print server is a good example: any machine on the network can send a print job to a particular print server.

The publish-subscribe messaging model (pub-sub) is more of a broadcast-oriented model. Publish-subscribe is based on the idea of topics, and typically has many consumers, and potentially many producers as well. A subscription to a forum is a familiar example.
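The difference between the two models can be sketched with simple in-memory stand-ins (these classes are illustrative analogs I made up for this post, not the JMS API): a queue hands each message to exactly one receiver, while a topic gives every subscriber its own copy.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// point-to-point analog: a message is consumed by exactly one receiver
class MessageQueue {
    private final Deque<String> messages = new ArrayDeque<>();
    void send(String m)  { messages.add(m); }
    String receive()     { return messages.poll(); }   // removes the message
}

// publish/subscribe analog: every subscriber to the topic gets its own copy
class MessageTopic {
    private final List<List<String>> subscribers = new ArrayList<>();
    List<String> subscribe() {
        List<String> inbox = new ArrayList<>();
        subscribers.add(inbox);
        return inbox;
    }
    void publish(String m) {
        for (List<String> inbox : subscribers) inbox.add(m);
    }
}

class MessagingModelsDemo {
    static String run() {
        MessageQueue q = new MessageQueue();
        q.send("print job 1");
        String first  = q.receive();   // "print job 1"
        String second = q.receive();   // null: already consumed, P2P style

        MessageTopic t = new MessageTopic();
        List<String> a = t.subscribe();
        List<String> b = t.subscribe();
        t.publish("forum post");       // both a and b receive a copy
        return first + "/" + second + "/" + a.size() + "/" + b.size();
    }
    public static void main(String[] args) { System.out.println(run()); }
}
```

In real JMS, the queue side corresponds to `QueueSender`/`QueueReceiver` and the topic side to `TopicPublisher`/`TopicSubscriber`, as described below.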

Fundamentals in JMS API
The basic elements in JMS are administered objects (connection factories and destinations), connections, sessions, message producers, message consumers, queues, topics, and messages.

A connection factory is the object a client uses to create a connection with a provider. A connection factory encapsulates a set of connection configuration parameters that has been defined by an administrator. A pair of connection factories comes preconfigured with the J2EE SDK and is accessible as soon as you start the service. Each connection factory is an instance of either the QueueConnectionFactory or the TopicConnectionFactory interface.

A message producer is an object created by a session and used for sending messages to a destination. The point-to-point form of a message producer implements the QueueSender interface; the pub/sub form implements the TopicPublisher interface.

A message consumer is an object created by a session and used for receiving messages sent to a destination. A message consumer allows a JMS client to register interest in a destination with a JMS provider, and the JMS provider manages the delivery of messages from the destination to the registered consumers. The PTP form of message consumer implements the QueueReceiver interface; the pub/sub form implements the TopicSubscriber interface.

Different JMS Message Types
The JMS API defines five message body formats, also called message types, which allow you to send and receive data in many different forms and provide compatibility with existing messaging formats:
i) TextMessage: A java.lang.String object (for example, the contents of an Extensible Markup Language file).
ii) MapMessage: A set of name/value pairs, with names as String objects and values as primitive types in the Java programming language. The entries can be accessed sequentially by enumerator or randomly by name. The order of the entries is undefined.
iii) BytesMessage: A stream of uninterpreted bytes. This message type is for literally encoding a body to match an existing message format.
iv) StreamMessage: A stream of primitive values in the Java programming language, filled and read sequentially.
v) ObjectMessage: A Serializable object in the Java programming language.
vi) Message: Composed of header fields and properties only. This message type is useful when a message body is not required.

In order to use JMS, you must have a JMS provider that can manage the sessions and queues. There are free, open source, and proprietary providers.

Articles about open source providers:

Articles about proprietary providers:

Suggested Reading

Importance of JMS
Benefits of JMS

Wednesday, January 23, 2008

Aspect Oriented Programming-Spring AOP

Continuing the series of articles on the Spring Framework, today let's explore the possibilities and solutions that can be achieved using Aspect-Oriented Programming (AOP), and in particular Spring AOP. To start with, I would like to delve into the core concepts of AOP, the myths and realities surrounding it, its benefits over OOP (Object-Oriented Programming), and the areas where it can be used effectively.

History of AOP
Object-oriented programming (OOP) introduced the concept of the object, which initiated a new way to structure applications and write programs. The same idea applies to the concept of the aspect. In 1996, Gregor Kiczales and his team at the Palo Alto Research Center (PARC), a subsidiary of Xerox Corporation located in California, originally defined the concept of the aspect.
Definition of aspect
An aspect is a common feature that's typically scattered across methods, classes, object hierarchies, or even entire object models. It is behavior that looks and smells like it should have structure, but you can't find a way to express this structure in code with traditional object-oriented techniques.

The aim behind the development of OOP was to organize the data of an application and its associated processing into coherent entities. This is achieved by having objects that encapsulate data along with the methods that manipulate that data and carry out the processing. From a conceptual point of view, an application is broken down according to the real-world objects that it models: in a stock-management application, for example, you might find supplier, article, customer, and other types of objects. By grouping together all the objects that possess the same characteristics, the concept of a class complements the concept of the object.
OOP has undeniably improved software engineering. Developers have built more-complex programs in a simpler fashion than would have been possible through procedural programming, and have written large applications in object-oriented languages: the Java 2 Platform, Enterprise Edition (J2EE) application servers were programmed in the Java language, and developers have implemented complex class hierarchies to construct graphical user interfaces, such as the Swing API included in the Java 2 Platform, Standard Edition (J2SE). AOP simply adds new concepts that allow you to improve object-oriented applications by making them more modular. In addition, AOP streamlines the development process by allowing the separation of development tasks: highly technical functionalities, such as security, can be developed separately from the business logic. AOP allows us to dynamically modify our static model to include the code required to fulfill secondary requirements without having to modify the original static model.

Benefits over OOP

Writing clear and elegant programs using only OOP is impossible in at least two cases: when the application contains crosscutting functionalities, and when the application includes code scattering. Let us explore these two limitations.

Cross-cutting concerns
While organizing an application into classes, the analysis must be driven by the need to separate and encapsulate the data and its associated processing into coherent entities. Although the classes are programmed independently of one another, they are sometimes behaviorally interdependent. Typically, this is the case when you implement rules of referential integrity. For example, a customer object must not be deleted while an outstanding order remains unpaid; otherwise, the program risks losing the contact details for that customer. To enforce this rule, you could modify the customer-deletion method so that it initially determines whether all the orders have been paid. However, this solution is deficient for several reasons:

  • Determining whether an order has been paid belongs not to customer management but to order management, so the customer class should not have to manage this functionality.
  • The customer class should not need to be aware of all the data-integrity rules that other classes in the application impose.
  • Modifying the customer class to take these data-integrity rules into account restricts the possibilities of reusing the class in other situations. Once the customer class implements any functionality that is linked to a different class, customer is, in many cases, no longer independently reusable.

Despite the fact that the customer class is not the ideal place to implement this referential-integrity rule, many object-oriented programs work this way for lack of a better solution. You might be thinking about integrating this functionality into an order class instead, but this solution is no better: no reason exists for the order class to allow the deletion of a customer. Strictly speaking, this rule is linked to neither the customers nor the orders but cuts across these two types of entities. One of the aims of dividing the data into classes is making the classes independent from one another. However, crosscutting functionalities, such as the rules of referential integrity, appear superimposed on that division, violating the independence of the classes. In other words, OOP does not allow you to neatly implement crosscutting functionalities.

Code scattering
In OOP, the principal way that objects interact is by invoking methods. In other words, an object that needs to carry out an action invokes a method that belongs to another object. (An object can also invoke one of its own methods.) OOP always entails two roles: that of the invoker and that of the invoked. When you write the code to call a method, you do not need to worry about how the service is implemented, because the call interacts only with the interface of the invoked object. You need only ensure that the parameters in the call correspond to those of the method's signature. Because methods are implemented within classes, you write each method as a block of code that is clearly delimited. To change a method, you obviously modify the file that contains the class where the method is defined. If you alter just the body of the method, the modification is transparent, because the method will still be called in exactly the same way. However, if you change the method's signature (for example, by adding a parameter), further implications arise: you must then modify all the calls to the method, and hence any classes that invoke it. If these calls exist in several places in the program, making the changes can be extremely time-consuming. The main point is this: even though the implementation of a method is located in a single class, the calls to that method can be scattered throughout the application. This phenomenon of code scattering slows down maintenance tasks and makes it difficult for object-oriented applications to adapt and evolve.

Of course, we can use design patterns such as Observer and Decorator in OOP to handle scenarios like cross-cutting concerns and code scattering, and the advantages of implementing them are enhanced modularity of the code and effective testing. In general, aspects are used to implement functionalities (security, persistence, logging, and so on) within an application. An aspect allows you to integrate crosscutting functionalities and code scattering into an object-oriented application by using the new concepts of the pointcut, the join point, and the advice.

Pointcuts
A pointcut identifies the points of execution in the application at which a cross-cutting concern needs to be applied. In aspect-oriented programming, a pointcut is a set of join points. Whenever the program execution reaches one of the join points described in the pointcut, a piece of code associated with the pointcut (called advice) is executed. This allows a programmer to describe where and when additional code should be executed in addition to an already defined behavior. This permits the addition of aspects to existing software, or the design of software with a clear separation of concerns, wherein the programmer weaves (merges) different aspects into a complete application. In Spring, a pointcut is just a set of methods that, when called, should have advice invoked around them. This is the second important piece of a Spring AOP aspect implementation!

Join Points
A join point is a point in the control flow of a program. In aspect-oriented programming, a set of join points is described as a pointcut. A join point is where the main program and the aspect meet (such as field access, method invocation, constructor invocation, etc.). Spring's built-in AOP currently supports only method invocation.

Advice
In aspect-oriented and functional programming, advice describes a class of functions that modify other functions when the latter are run; it is a function, method, or procedure that is to be applied at a given join point of a program.

Here is an example. The aspect below parodies the traditional "Hello World" for AspectJ by providing an aspect that captures all calls to the void foo(int, String) method in the MyClass class.

A simple HelloWorld aspect in AspectJ

public aspect HelloWorld {
    pointcut callPointcut() :
        call(void foo(int, String));

    before() : callPointcut() {
        System.out.println("Hello World");
        System.out.println("In the advice attached to the call pointcut");
    }
}

Having understood the history behind AOP and the terminology associated with it, let's explore the subject for today: Spring AOP.

AOP basics

  • Aspect: A modularized implementation of a software concern that cuts across various objects in a software implementation. Logging is a good example of an aspect. In Spring AOP, aspects are nothing more than regular Spring beans, which themselves are plain old Java objects (POJOs) registered suitably with the Spring Inversion of Control container. The core advantage of Spring AOP is its ability to realize an aspect as a plain Java class.

  • Join point: A point during program execution, such as a method executing or an exception being handled. In Spring AOP, a join point pertains to method execution only, which could be viewed as a limitation of Spring AOP. In reality, however, it is enough to handle most common cases of implementing crosscutting concerns.

  • Advice: Information about "when" an aspect is to be executed with respect to the join point. Examples of types of advice are "around," "before," and "after." They specify when the aspect's code will execute with respect to a particular join point.

  • Pointcut: A declarative condition that identifies the join points for consideration during an execution run. A pointcut is specified as an expression; Spring AOP uses the AspectJ pointcut expression syntax. An example pointcut expression is: execution(* com.myorg.springaop.examples.MyService*.*(..)). Asterisks in the expression refer to wildcards, as is conventional.
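Under the hood, Spring AOP realizes these concepts with runtime proxies rather than source-level weaving. The following self-contained sketch uses plain JDK dynamic proxies (no Spring classes; the MyService names are illustrative): the InvocationHandler plays the role of an around advice, and the crude method-name check stands in for a pointcut.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

class ProxyAdviceDemo {

    // The business interface the proxy will implement.
    public interface MyService {
        String save(String item);
        String find(String id);
    }

    public static class MyServiceImpl implements MyService {
        public String save(String item) { return "saved:" + item; }
        public String find(String id)   { return "found:" + id; }
    }

    // Creates a proxy whose handler acts like an "around" advice,
    // applied only to methods matched by a crude name-based "pointcut".
    public static MyService proxyFor(final MyService target, final StringBuilder log) {
        InvocationHandler advice = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                boolean matched = method.getName().startsWith("save"); // the "pointcut"
                if (matched) log.append("before ").append(method.getName()).append(';');
                Object result = method.invoke(target, args);           // proceed to the target
                if (matched) log.append("after ").append(method.getName()).append(';');
                return result;
            }
        };
        return (MyService) Proxy.newProxyInstance(
                MyService.class.getClassLoader(),
                new Class<?>[] { MyService.class },
                advice);
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        MyService service = proxyFor(new MyServiceImpl(), log);
        service.save("a");  // advised: matches the "pointcut"
        service.find("b");  // not advised
        System.out.println(log);  // before save;after save;
    }
}
```

The caller only ever sees the MyService interface, which is exactly why Spring AOP supports method-invocation join points only: the proxy can intercept nothing finer-grained than a method call.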

Spring AOP

The Spring Framework integrates with more powerful AOP frameworks, such as AspectJ. To use Spring AOP, you need to implement cross-cutting concerns and configure those concerns in your applications. Any advice written for Spring AOP is configurable in the Spring container through a simple, consistent configuration. This configuration is an important aspect of using AOP in Spring because it is the only one you need to remember for creating extension points to existing classes. For further understanding, I recommend reading the following articles rather than touching upon the Spring AOP concepts here again, as they are self-explanatory.

Spring AOP

Implementing Spring AOP in Enterprise Applications

Implementing Cross Cutting concerns using Spring 2.0 AOP

Implementing Logging as an Aspect using Spring AOP Framework

Suggested Video Tutorial by Ramnivas Laddad on AOP(Author of AspectJ in Action)

GoogleTech Talks Video by Gregor Kiczales,who introduced Aspects

Monday, January 21, 2008

Transaction Management using Spring Framework

In the previous article of this series on the Spring Framework, we discussed how easy it is to unit test POJOs in Spring, along with a short introduction to how effective Spring is at implementing transaction management. In continuation of that article, today we will explore configuring transaction management in Spring for different datasources.

Transaction Management in Spring

Sometimes the fundamental error in judgement a Java developer makes when dealing with transactions is using global transactions instead of local transactions. First, let us understand these two types of transactions.

What is a transaction?
A database transaction is a unit of interaction with a database management system or similar system that is treated in a coherent and reliable way independent of other transactions. In general, a database transaction must be atomic, meaning that it must be either entirely completed or aborted. Ideally, a database system will guarantee the properties of Atomicity, Consistency, Isolation and Durability (ACID) for each transaction.

Purpose of Transaction

In database products, the ability to handle transactions allows the user to ensure that the integrity of a database is maintained.

A single transaction might require several queries, each reading and/or writing information in the database. When this happens it is usually important to be sure that the database is not left with only some of the queries carried out. For example, when doing a money transfer, if the money was debited from one account, it is important that it also be credited to the depositing account. Also, transactions should not interfere with each other. For more information about desirable transaction properties, see ACID.

In order to reflect the correct state of reality in the system, a transaction should have the following properties.

  • Atomicity: This is the all-or-nothing property. Either the entire sequence of operations is successful or unsuccessful. A transaction should be treated as a single unit of operation. Only completed transactions are committed; incomplete transactions are rolled back or restored to the state where they started. There is absolutely no possibility of partial work being committed.
  • Consistency: A transaction maps one consistent state of the resources (e.g. database) to another. Consistency is concerned with correctly reflecting the reality of the state of the resources. Some of the concrete examples of consistency are referential integrity of the database, unique primary keys in tables etc.
  • Isolation: A transaction should not reveal its results to other concurrent transactions before it commits. Isolation assures that transactions do not access data that is being concurrently updated. The other name for isolation is serialization.
  • Durability: Results of completed transactions have to be made permanent and cannot be erased from the database due to system failure. Resource managers ensure that the results of a transaction are not altered due to system failures.

A simple transaction is usually issued to the database system in a language like SQL in this form:

  1. Begin the transaction
  2. Execute several queries (although any updates to the database aren't actually visible to the outside world yet)
  3. Commit the transaction (updates become visible if the transaction is successful)

If one of the queries fails the database system may rollback either the entire transaction or just the failed query. This behaviour is dependent on the DBMS in use and how it is set up. The transaction can also be rolled back manually at any time before the commit.
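The begin/execute/commit contract described above can be illustrated with a self-contained simulation (no real database or SQL; the MiniDb and Tx classes below are an illustrative toy model): updates are buffered while the transaction is open and become visible only on commit, while rollback discards them.

```java
import java.util.HashMap;
import java.util.Map;

class MiniTxDemo {

    // A toy key-value "database" that buffers updates per transaction.
    static class MiniDb {
        private final Map<String, String> committed = new HashMap<String, String>();

        class Tx {
            private final Map<String, String> pending = new HashMap<String, String>();

            void put(String key, String value) { pending.put(key, value); } // not visible yet
            void commit()   { committed.putAll(pending); pending.clear(); } // updates become visible
            void rollback() { pending.clear(); }                            // discard all updates
        }

        Tx begin() { return new Tx(); }
        String get(String key) { return committed.get(key); }
    }

    public static void main(String[] args) {
        MiniDb db = new MiniDb();

        MiniDb.Tx tx1 = db.begin();              // 1. begin the transaction
        tx1.put("checking", "-100");             // 2. execute several updates...
        System.out.println(db.get("checking"));  //    ...not visible to the outside world yet: null
        tx1.put("savings", "+100");
        tx1.commit();                            // 3. commit: both updates appear together
        System.out.println(db.get("checking"));  // -100

        MiniDb.Tx tx2 = db.begin();
        tx2.put("checking", "-999");
        tx2.rollback();                          // manual rollback before the commit
        System.out.println(db.get("checking"));  // still -100
    }
}
```

The money-transfer pair of puts in tx1 shows atomicity: neither the debit nor the credit is visible until the single commit makes both visible at once.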

Local vs Global Transactions
Local transactions are specific to a single transactional resource (a JDBC connection, for example), whereas global transactions are managed by the container and can span multiple transactional resources.

Unlike a centralized computing environment where application components and resources are located at a single site, and transaction management only involves a local data manager running on a single machine, in a distributed computing environment all the resources are distributed across multiple systems. In such a case transaction management needs to be done both at local and global levels. A local transaction is one which involves activities in a single local resource manager. A distributed or a global transaction is executed across multiple systems, and its execution requires coordination between the global transaction management system and all the local data managers of all the involved systems. The Resource Manager and Transaction Manager (TM), also known as a transaction processing monitor (TP monitor), are the two primary elements of any transactional system. In centralized systems, both the TP monitor and the resource manager are integrated into the DBMS server. To support advanced functionalities required in a distributed component-based system, separation of TP monitor from the resource managers is required.

Local transactions are easy to manage, and because most operations work with just one transactional resource (such as a JDBC transaction), using local transactions is enough. However, if you are not using Spring, you still have a lot of transaction management code to write, and if in the future the scope of the transaction needs to be extended across multiple transactional resources, you have to drop the local transaction management code and rewrite it to use global transactions.

A global or distributed transaction consists of several subtransactions and is treated as a single recoverable atomic unit. The global transaction manager is responsible for managing distributed transactions by coordinating with different resource managers to access data at several different systems. Since multiple application components and resources participate in a transaction, it is necessary for the transaction manager to establish and maintain the state of the transaction as it occurs. Global transactions in non-Spring applications are, in most cases, coded using JTA, which is a complex API that depends on JNDI. This also means that you have to use a J2EE application server.

Two-Phase Commit (2PC) Protocol

The two-phase commit protocol enables the Atomicity in a distributed transaction scenario. The system module responsible for this protocol is usually called a transaction manager or a coordinator. As the name implies, there are two phases to the protocol. In the first phase, the coordinator asks each participant to vote on a commit or a rollback. This is accomplished by sending a so-called prepare request to each participant. When a participant votes for a commit, it loses its right to roll back independently, meaning that it has to wait for the final outcome received from the coordinator. The first phase ends when the coordinator receives all votes or if a timeout occurs. The second phase starts with the final decision made by the coordinator. In the case of a timeout or at least one "rollback" vote, the decision to roll back is sent to each participant that voted for "commit" in the first phase. As a result, all data modifications at all places involved are rolled back. Should all participants vote to commit, then and only then, the coordinator decides to perform a global commit and sends a commit notification to all participants. Consequently, all the work at all places is committed.

The complexity of the two-phase commit relates not only to the distributed nature of a transaction, but also to the possibility of a non-atomic outcome, known as heuristics. For example, the first participant may commit changes during phase two, while the second participant encounters a hardware failure when saving changes to disk. Being able to roll back, or at least report the error so that the system can be recovered to its original state, is an important part of the process.

By persisting intermediate steps of the 2PC, that is, logging abort, ready to commit, and commit messages, the protocol provides a certain degree of reliability in case the coordinator or participants fail in the midst of transaction processing. The two-phase commit protocol can be implemented in a synchronous or asynchronous manner with variations to its actual execution.
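The voting logic of the two phases can be sketched in a few lines of self-contained Java. The Participant and Resource classes below are illustrative simplifications: a real coordinator would log each step durably, handle timeouts, and send the rollback decision only to participants that voted commit, whereas this sketch notifies everyone.

```java
import java.util.Arrays;
import java.util.List;

class TwoPhaseCommitDemo {

    public interface Participant {
        boolean prepare();   // phase 1: vote commit (true) or rollback (false)
        void commit();       // phase 2 outcome: make the work permanent
        void rollback();     // phase 2 outcome: undo the work
    }

    // Phase 1: collect votes. Phase 2: global commit only if ALL voted commit.
    public static boolean coordinate(List<Participant> participants) {
        boolean allPrepared = true;
        for (Participant p : participants) {
            if (!p.prepare()) { allPrepared = false; break; } // one "no" vote aborts everything
        }
        for (Participant p : participants) {
            if (allPrepared) p.commit(); else p.rollback();
        }
        return allPrepared;
    }

    public static class Resource implements Participant {
        private final boolean healthy;
        public String state = "active";
        public Resource(boolean healthy) { this.healthy = healthy; }
        public boolean prepare()  { return healthy; }        // vote
        public void commit()      { state = "committed"; }
        public void rollback()    { state = "rolled-back"; }
    }

    public static void main(String[] args) {
        Resource db = new Resource(true);
        Resource queue = new Resource(false);           // this participant votes "rollback"
        boolean committed = coordinate(Arrays.<Participant>asList(db, queue));
        System.out.println(committed + " " + db.state); // false rolled-back
    }
}
```

Note how the healthy db resource is rolled back anyway: having voted commit in phase one, it gave up the right to decide independently and must follow the coordinator's global decision.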

Programmatic vs Declarative Transactions
The Java EE container implements support for transactions and facilitates the ACID properties required by the application logic. The container provides an implementation for the two-phase commit protocol between a transaction manager and underlying resources such as the database or messaging provider. The Java EE container is also responsible for the transaction context propagation and provides support for a distributed two-phase commit. With a distributed two-phase commit, a Java EE application can modify data across multiple application servers as if it is a single transaction.

The decision whether to use programmatic or declarative transaction support depends on the level of transaction control and complexity required by the application design. With declarative transaction support, the boundaries and individual properties of a transaction are specified in a deployment descriptor. With programmatic support, the application logic encapsulates the transactional characteristics in the code. A POJO has to use programmatic transaction demarcation, but thanks to Spring's Inversion of Control and dependency injection, CMT-style declarative transactions are now also available for POJO applications.

Container-Managed Transactions (EJB 2.1) vs. Spring's Declarative Transaction Management (Spring 1.2):

EJB's transaction attributes:
NotSupported, Supports, Required, RequiresNew, Mandatory, Never

Spring's propagation behaviors (constants of interface org.springframework.transaction.TransactionDefinition):
PROPAGATION_REQUIRED, PROPAGATION_SUPPORTS, PROPAGATION_MANDATORY, PROPAGATION_REQUIRES_NEW, PROPAGATION_NOT_SUPPORTED, PROPAGATION_NEVER, PROPAGATION_NESTED

EJB's isolation levels (constants of interface java.sql.Connection):
TRANSACTION_READ_UNCOMMITTED, TRANSACTION_READ_COMMITTED, TRANSACTION_REPEATABLE_READ, TRANSACTION_SERIALIZABLE

Spring's isolation levels (constants of interface org.springframework.transaction.TransactionDefinition):
ISOLATION_DEFAULT, ISOLATION_READ_UNCOMMITTED, ISOLATION_READ_COMMITTED, ISOLATION_REPEATABLE_READ, ISOLATION_SERIALIZABLE

Rolling Back a Container-Managed Transaction
There are two ways to roll back a container-managed transaction.
First, if a system exception is thrown, the container will automatically roll back the transaction. Second, by invoking the setRollbackOnly method of the EJBContext interface, the bean method instructs the container to roll back the transaction. If the bean throws an application exception, the rollback is not automatic but can be initiated by a call to setRollbackOnly.

Spring’s Roll back rules
Transaction can be declared to roll back or not based on exceptions that are thrown during the course of the transaction.
By default, transactions are rolled back only on runtime exceptions and not on checked exceptions.
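A rollback rule is expressed by appending exception names to the transaction attribute, prefixed with - (roll back when thrown) or + (commit anyway). A sketch in Spring 1.x's transactionAttributes syntax (the method patterns and exception names are illustrative):

```xml
<property name="transactionAttributes">
  <props>
    <!-- also roll back on this checked exception, not just runtime exceptions -->
    <prop key="placeOrder*">PROPAGATION_REQUIRED,-NoStockException</prop>
    <!-- commit even though this runtime exception was thrown -->
    <prop key="audit*">PROPAGATION_REQUIRED,+SkippableRuntimeException</prop>
  </props>
</property>
```

Because the rule is declared in configuration, the business code never needs to call a rollback API for these cases.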

Bean-Managed Transactions (EJB 2.1) vs. Spring's Programmatic Transaction Management (Spring 1.2):
In a bean-managed transaction, the code in the session or message-driven bean explicitly marks the boundaries of the transaction. An entity bean cannot have bean-managed transactions; it must use container-managed transactions instead.
Although beans with container-managed transactions require less coding, they have one limitation: When a method is executing, it can be associated with either a single transaction or no transaction at all. If this limitation will make coding your bean difficult, you should consider using bean-managed transactions.

Spring provides two means of programmatic transaction management:
Using the TransactionTemplate
Using a PlatformTransactionManager implementation directly
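The TransactionTemplate applies the execute-around idiom: your code runs inside a callback, and the template handles begin, commit on success, and rollback on exception. Below is a self-contained model of that idiom, not the real Spring classes; the Callback and TxTemplate names are illustrative stand-ins for Spring's TransactionCallback and TransactionTemplate.

```java
class TemplateIdiomDemo {

    public interface Callback<T> { T doInTransaction() throws Exception; }

    // Mimics TransactionTemplate.execute(): begin, run the callback,
    // commit on success, roll back on any exception.
    public static class TxTemplate {
        public String lastOutcome = "none";

        public <T> T execute(Callback<T> action) {
            // begin transaction (elided in this model)
            try {
                T result = action.doInTransaction();
                lastOutcome = "committed";   // commit on normal completion
                return result;
            } catch (Exception ex) {
                lastOutcome = "rolled-back"; // roll back on failure
                throw new RuntimeException(ex);
            }
        }
    }

    public static void main(String[] args) {
        TxTemplate template = new TxTemplate();
        String result = template.execute(new Callback<String>() {
            public String doInTransaction() { return "ok"; }
        });
        System.out.println(result + " " + template.lastOutcome); // ok committed
    }
}
```

The payoff of the idiom is that the begin/commit/rollback plumbing lives in one place; the application code supplies only the body of the transaction.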

Configuration of Deployment Descriptors: Spring vs. EJB


EJB ejb-jar.xml:

<!-- A minimal session EJB deployment -->
<session>
    <ejb-name>myEJB</ejb-name> <!-- name is illustrative -->
    <!-- home, remote, and ejb-class elements omitted -->
    <session-type>Stateful</session-type> <!-- or Stateless -->
    <transaction-type>Container</transaction-type>
</session>

<!-- OPTIONAL, can be many. How the container is to manage
     transactions when calling an EJB's business methods -->
<container-transaction>
    <method>
        <ejb-name>myEJB</ejb-name>
        <!-- Can specify many methods at once here -->
        <method-name>*</method-name>
    </method>
    <!-- NotSupported|Supports|Required|RequiresNew|Mandatory|Never -->
    <trans-attribute>Required</trans-attribute>
</container-transaction>

Spring applicationContext.xml:

<bean id="petStore"
      class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
    <property name="transactionManager" ref="txManager"/>
    <property name="target" ref="petStoreTarget"/>
    <property name="transactionAttributes">
        <props>
            <prop key="insert*">PROPAGATION_REQUIRED</prop>
            <prop key="update*">PROPAGATION_REQUIRED</prop>
            <prop key="*">PROPAGATION_REQUIRED,readOnly</prop>
        </props>
    </property>
</bean>

The following are the key differences from EJB CMT (from Introduce the Spring Framework)
a. Transaction management can be applied to any POJO. We recommend that business objects implement interfaces, but this is a matter of good programming practice, and is not enforced by the framework.

b. Programmatic rollback can be achieved within a transactional POJO through using the Spring transaction API. We provide static methods for this, using ThreadLocal variables, so you don’t need to propagate a context object such as an EJBContext to ensure rollback.

c. You can define rollback rules declaratively. Whereas EJB will not automatically roll back a transaction on an uncaught application exception (only on unchecked exceptions, other types of Throwable and “system” exceptions), application developers often want a transaction to roll back on any exception. Spring transaction management allows you to specify declaratively which exceptions and subclasses should cause automatic rollback. Default behaviour is as with EJB, but you can specify automatic rollback on checked, as well as unchecked exceptions. This has the important benefit of minimizing the need for programmatic rollback, which creates a dependence on the Spring transaction API (as EJB programmatic rollback does on the EJBContext).

d. Because the underlying Spring transaction abstraction supports savepoints if they are supported by the underlying transaction infrastructure, Spring's declarative transaction management can support nested transactions, in addition to the propagation modes specified by EJB CMT (which Spring supports with identical semantics to EJB). Thus, for example, if you are doing JDBC operations on Oracle, you can use declarative nested transactions with Spring.

You cannot control the atomicity, consistency, and durability of a transaction, but you can control the transaction propagation and timeout, set whether the transaction should be read-only, and specify the isolation level.

Spring encapsulates all these settings in a TransactionDefinition interface. This interface is used in the core interface of the transaction support in Spring, the PlatformTransactionManager, whose implementations perform transaction management on a specific platform, such as JDBC or JTA. The core method, PlatformTransactionManager.getTransaction(), returns a TransactionStatus interface, which is used to control the transaction execution, more specifically to set the transaction result and to check whether the transaction is read-only or whether it is a new transaction.

Exploring the TransactionDefinition Interface

As we mentioned earlier, the TransactionDefinition interface controls the properties of a transaction.

package org.springframework.transaction;

public interface TransactionDefinition {
    int getPropagationBehavior();
    int getIsolationLevel();
    int getTimeout();
    boolean isReadOnly();
}

Using the TransactionStatus Interface

This interface allows a transaction manager to control the transaction execution. The code can check whether the transaction is a new one or a read-only transaction, and it can initiate a rollback.

public interface TransactionStatus {
    boolean isNewTransaction();
    void setRollbackOnly();
    boolean isRollbackOnly();
}
Implementations of the PlatformTransactionManager

This is an interface that uses the TransactionDefinition and TransactionStatus interfaces to create and manage transactions. The actual implementations of this interface must have detailed knowledge of the transaction manager. DataSourceTransactionManager controls transactions performed within a DataSource; HibernateTransactionManager controls transactions performed on a Hibernate session; JdoTransactionManager manages JDO transactions; and JtaTransactionManager delegates transaction management to JTA.

Configuring Spring’s Transaction Manager for JDBC
To set up transaction management for your applications, you need to configure the transaction
manager of your choice. The simplest way to start is to use the DataSourceTransactionManager. It’s suitable when working with JDBC or iBATIS.

Configuring DataSourceTransactionManagement

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="${jdbc.driverClassName}"/>
    <property name="url" value="${jdbc.url}"/>
    <property name="username" value="${jdbc.username}"/>
    <property name="password" value="${jdbc.password}"/>
</bean>

<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="location" value=""/> <!-- path to the JDBC properties file -->
</bean>

<bean id="transactionManager"
      class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSource"/>
</bean>

In the most straightforward scenario, DataSourceTransactionManager will obtain a new Connection object from the DataSource object and bind it to the current thread when a
transaction starts. It will remove the Connection object from the current thread when the transaction ends and commit or roll back the active transaction, as necessary, and close the Connection object.
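The thread-binding described above is typically implemented with a ThreadLocal. The following is a simplified, self-contained model of the mechanism, not Spring's actual DataSourceUtils code; FakeConnection and TxSynchronizer are illustrative names.

```java
class ThreadBindingDemo {

    // Stand-in for a JDBC Connection.
    public static class FakeConnection { }

    // Binds one "connection" per thread for the duration of a transaction,
    // so all data-access code on that thread reuses the same connection.
    public static class TxSynchronizer {
        private static final ThreadLocal<FakeConnection> bound =
                new ThreadLocal<FakeConnection>();

        public static void begin()             { bound.set(new FakeConnection()); }
        public static FakeConnection current() { return bound.get(); }
        public static void end()               { bound.remove(); } // commit/roll back, then unbind
    }

    public static void main(String[] args) {
        TxSynchronizer.begin();
        FakeConnection first = TxSynchronizer.current();
        FakeConnection second = TxSynchronizer.current();
        System.out.println(first == second);  // true: same connection within the transaction
        TxSynchronizer.end();
        System.out.println(TxSynchronizer.current() == null); // true: unbound after the transaction
    }
}
```

Because the binding is per thread, two concurrent transactions on different threads each see their own connection without any explicit passing of the Connection object through method signatures.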
Configuring Spring’s Transaction Manager for JTA
An alternative transaction-management strategy is to use a JTA transaction manager. All application servers come with such a transaction manager, although some stand-alone implementations exist. You don't automatically need to use JTA when deploying applications in an application server; nothing stops you from using DataSourceTransactionManager, which gives you the advantage of more independence from the deployment environment. However, in a minority of cases, you will want to delegate transaction management to the JTA transaction manager of your application server. The most common reason for this is to work with distributed transactions.

Setting Up Transaction Management via JTA in Spring
<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName" value="java:env/jdbc/myDataSource"/>
</bean>

<bean id="transactionManager"
      class="org.springframework.transaction.jta.JtaTransactionManager"/>

When working with the JTA transaction manager of an application server, you must use a
DataSource object that was obtained via JNDI from the same application server. JtaTransactionManager needs no special configuration. This is because the JTA transaction
manager in the application server will automatically start and end the transactions on
Connection objects that were obtained from the JNDI DataSource object.

Bringing Advanced Transaction Capabilities to Spring Applications
This article discusses Spring's transaction management facilities and the common use cases in Spring where an external transaction manager is required. A real-world application is used to illustrate the transactional aspects and features. The focus is on leveraging JTA transaction management in the Spring Framework for enterprise applications. The article shows how Spring's transaction services can seamlessly expose and interact with a Java EE application server's transaction manager such as the Oracle Application Server and the OC4JJtaTransactionManager.

Coming up Next in this series related to "Spring Framework" is, Aspect Oriented Programming(AOP).

Suggested Reading :

Pro Spring Book preview

Suggested Video Tutorial

Mission Critical Transaction Management using Spring